User's Guide

This guide shows Java examples of HFS client usage:

  • how to configure the HFS client;
  • how to upload files to the HFS server;
  • how to download a stored file from the HFS server;
  • how to manage the lifecycle of uploaded files.

HFS client API

Benefits of using the HFS API:

  • your application DOES NOT handle any streaming operations;
  • your application DOES NOT manage uploaded files;
  • your application controls lifecycle, ownership and access to uploaded files.

Constraints the HFS API places on your application:

  • your application still has to store file information (ID, name, owner, etc.);
  • your application should provide GC integration.

The HfsClient interface:
/** HFS client allows you to upload, publish and manage files stored on HFS server. */
public interface HfsClient {
    /**
     * Creates an absolute URI to upload a static file to the file server.
     *
     * @param format Format of the HFS server response after the upload process completes, e.g.: <ul>
     *     <li>{@code null} - to send the result token in the HTTP response body;</li>
     *     <li>{@code URI} - to send an HTTP redirect with the result token in the query string.</li>
     * </ul> For more details and options see the HFS server specification.
     * @return An absolute URI to upload a static file to the file server.
     */
    String createUploadUri(@Nullable String format);

    /**
     * Creates an absolute URI to publish a static file with the specified ID.
     *
     * @param fileId   File ID to publish.
     * @param fileName File name to publish (URI suffix) or {@code null} to skip the file name setting.
     * @return An absolute public URI for the specified file ID.
     */
    String createPublishUri(UUID fileId, @Nullable String fileName);

    /**
     * Parses uploaded files information from the upload callback token received in the request.
     *
     * @param token Upload callback token received in the request.
     * @return Uploaded files information parsed from the token.
     * @throws IllegalArgumentException If token is formed incorrectly or cannot be parsed.
     */
    HfsUploadInfo parseUploadInfo(String token) throws IllegalArgumentException;

    /**
     * Parses a garbage collector request from the GC token received in the request.
     *
     * @param token GC token received in the request.
     * @return Garbage collector request parsed from the token.
     * @throws IllegalArgumentException If token is formed incorrectly or cannot be parsed.
     */
    HfsGcRequest parseGcRequest(String token) throws IllegalArgumentException;

    /**
     * Creates a UUID writer to send the alive file IDs response from your GC driver implementation
     * to the HFS server. See details in the {@link HfsGcWriter} class.
     *
     * @param channel Channel to write the binary garbage collector response to.
     * @return UUID writer to send the alive file IDs response from your GC driver implementation to the HFS server.
     */
    HfsGcWriter createGcWriter(WritableByteChannel channel);
}

Configure client

The Java client is configured with an hfs.xml file in the classpath root, using the same approach as the log4j or logback libraries.

In the default Maven project layout this configuration should be placed in src/main/resources/hfs.xml.

<?xml version="1.0" encoding="UTF-8"?>
<hfs xmlns="http://www.ursaj.com/schema/hfs/client"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="
        http://www.ursaj.com/schema/hfs/client
        http://www.ursaj.com/schema/hfs/client/hfs-client-1.0.xsd">

    <!--
      - Definitions of restriction profiles to use in your application with
      - HfsFactory.getClient("[<profile ID>@]<client ID>")
      -->
    <profile id="example" expirationDelay="20m" maxUploadSize="10m" maxFileSize="10m"/>

    <!--
      - Definitions of HFS clients to use in your application with
      - HfsFactory.getClient("[<profile ID>@]<client ID>")
      -->
    <client id="localhost" uri="http://localhost:9090">
        <cipher algorithm="MD5" privateKey="s3cret"/>
    </client>
</hfs>

Then you can create an instance of the HfsClient object:

// Create an instance of HfsClient object with default profile.
HfsClient client1 = HfsFactory.getClient("localhost");

// Create an instance of HfsClient object with 'example' profile.
HfsClient client2 = HfsFactory.getClient("example@localhost");

Upload file

File upload works in the following sequence:

  • the user receives an upload form generated by your application;
  • the user uploads files to the HFS server;
  • the user agent sends an upload report to your application (via an HTTP redirect).

// Provide an absolute URI for your handler to store the uploaded files information in.
// In a servlet container the absolute URI can be calculated from the HttpServletRequest
// object (see the sketch after this snippet).
String callbackUri = "http://localhost/?token=";

// Create an instance of the HfsClient object (or re-use an existing/shared one).
HfsClient client = HfsFactory.getClient("localhost");

// Create an absolute URI to upload files to.
String uploadUri = client.createUploadUri(callbackUri);
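
For example, in a servlet container the callback URI can be derived from the current request instead of being hard-coded. A minimal sketch, assuming this code runs in the same servlet that will later receive the callback (req is the current HttpServletRequest):

// Build the callback URI from the request URL of the handler itself.
String callbackUri = req.getRequestURL().append("?token=").toString();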

Then use the created upload URI in an HTML form.

<form method="POST" enctype="multipart/form-data"
      action="http://localhost:9090/lgDEq-bojQ0aIQiUy9frlCkSGGh0dHA6Ly9sb2NhbGhvc3QvP3Rva2VuPQ">
    <input type="file" name="file[]"/>
    <input type="submit" value="Upload"/>
</form>
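
In practice, the action attribute is filled in with the generated upload URI rather than hard-coded. For example, in a JSP view (assuming the uploadUri value computed above has been exposed as a request attribute named uploadUri):

<form method="POST" enctype="multipart/form-data" action="${uploadUri}">
    <input type="file" name="file[]"/>
    <input type="submit" value="Upload"/>
</form>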

After the upload completes, the HFS server redirects the user's agent to the callback URI with an upload report.

HTTP Response Headers
Location: http://localhost/?token=jXkB5Av9VP0qEAoOCMwBEglObyBmaWxlcy4

The user's agent delivers information about the uploaded files to your handler.

// Create an instance of the HfsClient object (or re-use an existing/shared one).
HfsClient client = HfsFactory.getClient("localhost");

// Parse & validate upload report from HFS server.
HfsUploadInfo uploadInfo = client.parseUploadInfo(req.getParameter("token"));

// Choose your action for received upload report.
switch (uploadInfo.getStatus()) {
    case SUCCEED:
        // Store uploaded files information: ID, size, content-type, name, etc...
        fileService.storeUploadedFiles(uploadInfo.getFiles(), userService.currentUser());
        break;
    case ... : // handle other codes, if you need them
        break;
    default:
        throw new IllegalStateException("Unsupported upload status: " + uploadInfo.getStatus()); 
}

Download file

// Provide the ID and a displayable name of the file to download.
UUID fileId = ...;
String fileName = null; // Can be omitted.

// Create an instance of the HfsClient object (or re-use an existing/shared one).
HfsClient client = HfsFactory.getClient("localhost");

// Create an absolute URI for the file to download.
String fileUri = client.createPublishUri(fileId, fileName);

Then use the created URI in your HTML.

<p>Download file: <a href="http://localhost:9090/_S2AjUzBgOoSHwigoNjplikSFgiPlP2bhZzm_NABEObqm-yzpo78kwE/report.xls">report.xls</a>.</p>

The link will expire within the expirationDelay configured in the HFS client profile, counted from its creation time.
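
Because the link expires, a common pattern is to create the publish URI on demand inside your own download handler and redirect to it. A minimal sketch, assuming a hypothetical fileService and StoredFile record type in your application:

@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    // Look up the stored file record in your application's data (hypothetical lookup).
    StoredFile file = fileService.findById(UUID.fromString(req.getParameter("fileId")));

    // Create a fresh, short-lived public URI and redirect the user's agent to it.
    HfsClient client = HfsFactory.getClient("localhost");
    resp.sendRedirect(client.createPublishUri(file.getId(), file.getName()));
}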

GC integration

The HFS server manages files on the garbage collection principle:

  • it keeps alive files;
  • it removes all files that are not alive.

Periodically the HFS server asks your application for the alive files, if this is configured in the HFS store:

<!-- Garbage collector cleanup URI. -->
<property name="cleanupUri" value="http://application.uri/cleanup-servlet?token="/>

<!-- Minimum interval between requests to cleanup URI (milliseconds). By default 30 minutes. -->
<property name="cleanupInterval" value="1800000"/>

You should implement the com.ursaj.hfs.gc.HfsGcDriver interface and handle the GC request with it, for example from a servlet:

@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) ... {
    // Parse GC request information from the provided token.
    HfsGcRequest gcReq = hfsClient.parseGcRequest(req.getParameter("token"));

    // Resolve (or initialize) your GC driver implementation.
    HfsGcDriver gcDriver = new HfsEraseGcDriver();

    // Render garbage collector response in format compatible with HFS server.
    resp.setHeader("Content-Type", "application/octet-stream");
    HfsGcWriter writer = hfsClient.createGcWriter(Channels.newChannel(resp.getOutputStream()));

    try {
        // Collect and handle alive file IDs for specified GC request.
        gcDriver.collectAliveFiles(gcReq.getFromId(), writer);
        writer.close();
        writer = null;
    }
    finally {
        HfsUtils.closeSilently(writer);
    }
}
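
The servlet above delegates the actual work to your GC driver. Below is a minimal sketch of a driver backed by your application's own file records; the collectAliveFiles signature and the writer's write(UUID) method are assumptions inferred from the servlet example, and FileService is a hypothetical storage layer:

public class DatabaseGcDriver implements HfsGcDriver {
    private final FileService fileService; // your application's storage layer (hypothetical)

    public DatabaseGcDriver(FileService fileService) {
        this.fileService = fileService;
    }

    // Stream the IDs of all files your application still references, starting from the given ID.
    // The signature and HfsGcWriter.write(UUID) are assumptions; see the HfsGcWriter javadoc.
    @Override
    public void collectAliveFiles(UUID fromId, HfsGcWriter writer) throws IOException {
        for (UUID fileId : fileService.listAliveFileIds(fromId)) // hypothetical query
            writer.write(fileId);
    }
}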

Read more