...

On the webserver I decided to simply use the provided Jetty to start things up. There I only edited the CloudDrive-master/src/web_clouddrive/src/main/props/production.default.props file to connect to the MySQL server on the other VM, packaged the app with "./sbt package" and copied the resulting .war file to CloudDrive-master/binaries/website/root.war.
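For reference, the database part of such a props file might look roughly like this. The key names below are illustrative Lift/Mapper-style properties, not necessarily the exact ones CloudDrive uses, and the hostname is a placeholder; keep the key names already present in the existing file and only substitute your own DB host and credentials:

```
# Hypothetical MySQL connection settings for production.default.props;
# "mysql-vm" stands in for the actual hostname of the database VM.
db.driver=com.mysql.jdbc.Driver
db.url=jdbc:mysql://mysql-vm:3306/clouddrive
db.user=clouddrive
db.password=secret
```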

After issuing "./sbt update" and then "./sbt ~jetty-run" I was able to start the website. You can reach it at http://clouddrivewebserver1.du1.cesnet.cz:8080


Problems/Solutions:

1)

Both installations have the same problem with serving the jquery.js file. When I am logged in to the website and at the root of my home directory view, I can click the buttons to create a new folder or upload a file with no problems (the buttons have JavaScript onclick listeners on them). The URL of the jquery.js file is http://clouddrivewebserver1.du1.cesnet.cz:8080/jquery.js. The URL of the webpage is http://clouddrivewebserver1.du1.cesnet.cz:8080/webdrive .

But when I descend into a subdirectory, say /test/, and try to create a new folder, nothing happens. When I then check the Jetty log I can see a 404 error and a Java stack trace with a NullPointerException (it can be seen in the included file). My browser is now trying to access the URL http://clouddrivewebserver1.du1.cesnet.cz:8080/webdrive/jquery.js while the URL of the webpage is http://clouddrivewebserver1.du1.cesnet.cz:8080/webdrive/test .

Solution:

When I was using the website at the root view of my directories, my browser (Chromium 20) tried to access the jQuery library at example.org/jquery.js.
When I descended into a directory, it tried to access it at example.org/webdrive/jquery.js.
And when I went into a directory inside the previous one, it tried example.org/webdrive/test/jquery.js.
(The directories are structured as /test/nexttest/.)
It seemed the URL of jquery.js was relative to the webpage I was accessing. When I looked into the HTML code, that was indeed the case:

<script type="text/javascript" src="jquery.js" id="jquery"></script>

When I edited the template at CloudDrive-master/src/web_clouddrive/src/main/webapp/templates-hidden/, changed the src to "/jquery.js" and redeployed the webapp, it started working.
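The browser behaviour here is just standard relative URL resolution. A quick sketch with Python's urllib (not part of CloudDrive, purely to illustrate why the relative src breaks in subdirectories while the absolute one does not):

```python
from urllib.parse import urljoin

# A relative src is resolved against the URL of the current page,
# so the resulting location moves as you descend into subdirectories.
page_root = "http://clouddrivewebserver1.du1.cesnet.cz:8080/webdrive"
page_sub  = "http://clouddrivewebserver1.du1.cesnet.cz:8080/webdrive/test"

print(urljoin(page_root, "jquery.js"))  # .../jquery.js          (found)
print(urljoin(page_sub,  "jquery.js"))  # .../webdrive/jquery.js (404)

# An absolute src ("/jquery.js") is always resolved against the
# server root, regardless of the current page.
print(urljoin(page_sub, "/jquery.js"))  # .../jquery.js          (found)
```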

 

2)

The second problem concerns the multi-VM setup and downloading files. When I try to download a file via the website in the single-VM setup, it works and I can successfully download the file. But when I try to download a file via the website in the multi-VM setup, I get an error webpage, and when I then check the folder, the file I tried to download is gone. Debug logging from Jetty for the failed download attempt can be seen in the attached file.

...

filesystem_prefix = /cloud/data/

...

The thing is, I have this set up as a directory on the local disk of my App VM, as that is the place where I store the blobs (I upload files via cadaver, which connects directly to the App VM). I can see the files being there:

...

-rw-r--r-- 1 cloud cloud  784 Nov 26 14:47 d9b19d10-1608-45fc-bdb4-5cf841660d75

...

Then on the webserver I can see that the directory is created, but obviously it is empty:

...

Do I understand correctly that I need to have the filesystem with the binary blobs of the stored files mounted on the webserver nodes as well? I would imagine the WebDAV daemon nodes need to see the filesystem, as they process the files.

Solution:

The reason you're not seeing the data on your web server is that the actual data needs to be on a filesystem shared between the instances.
The easiest way to think of it is to keep a few things in mind:
1) The metadata is in Voldemort and gives a (hopefully) consistent view of everything without visiting the actual file store. This speeds things up and provides an extra layer of resilience in many ways (from fault tolerance to ownership of encryption keys).
2) The actual data lives in just one place. If you swapped the local filesystem for S3, you would see why it has to work this way: Voldemort simply stores a pointer to the file location. If the underlying filesystem is not available on the web server under the same mount point, things will fail. The same goes when testing multiple WebDAV instances.
3) There are multiple solutions to 2):
  • use a shared filesystem between the instances; anything from NFS upwards will do.
  • use S3.
  • write your own filesystem interface in Scala, mimicking the S3 or local-filesystem driver but doing exactly what you want.
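Point 2) can be reduced to a toy sketch, assuming nothing about CloudDrive's actual code: the metadata store only hands out a path under filesystem_prefix, so every node that serves downloads must see that same path on its local filesystem. The names below are made up for illustration:

```python
import os

# Stand-in for Voldemort: metadata maps a blob id to a path under
# filesystem_prefix. The function names here are hypothetical.
filesystem_prefix = "/cloud/data/"
metadata = {
    "d9b19d10-1608-45fc-bdb4-5cf841660d75":
        filesystem_prefix + "d9b19d10-1608-45fc-bdb4-5cf841660d75",
}

def fetch_blob(blob_id):
    path = metadata[blob_id]       # the pointer the metadata store returns
    if not os.path.exists(path):   # true on a web VM without the shared mount
        raise FileNotFoundError(f"blob {blob_id} not visible at {path}")
    with open(path, "rb") as f:
        return f.read()
```

On the App VM the path exists and the read succeeds; on a web server without the shared mount the exact same lookup fails, which is the behaviour described above.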

The setup needs to be changed to use NFS shares.
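Something along these lines should do, exporting /cloud/data/ from the App VM and mounting it on the web server under the same path, so filesystem_prefix resolves identically on both. The App VM hostname and the NFS options are placeholders; adjust them to the actual VMs:

```shell
# On the App VM (the NFS server), add to /etc/exports:
#   /cloud/data  clouddrivewebserver1.du1.cesnet.cz(rw,sync,no_subtree_check)
# then reload the export table:
exportfs -ra

# On the web server VM, mount it under the same filesystem_prefix:
mkdir -p /cloud/data
mount -t nfs appvm.example.org:/cloud/data /cloud/data
```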