Uploading Big files to Azure Storage from ASP.NET MVC

Posted by: Suprotim Agarwal, on 5/9/2013, in Category Microsoft Azure
Abstract: This article demonstrates how to do Chunked File Uploads to Azure Storage from an ASP.NET MVC application

In our previous article we saw how to upload multiple files to the Azure Blob Storage emulator. However, that approach had a major limitation: it could only handle files smaller than the maximum request size allowed by IIS and ASP.NET (4MB by default).
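For context, that cap comes from ASP.NET's maxRequestLength setting, which defaults to 4096 KB. You could raise it in web.config, as sketched below, but very large single-request uploads remain fragile; the chunked approach in this article instead keeps every individual request under the default limit, so no such change is needed. The values shown here are illustrative only:

<!-- Sketch only: raising ASP.NET's request size cap (default 4096 KB).
     Not needed for the chunked approach described in this article. -->
<system.web>
  <httpRuntime maxRequestLength="102400" executionTimeout="3600" />
</system.web>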

Modern browsers like IE10+, Firefox and Chrome have explicit support for the FileList API and the ability to send files in chunks. This is based on browser support for the slice API on File objects. The latest IE (v10) and Firefox (v20.0.1) both support slice, and there is a vendor-prefixed fallback for WebKit browsers (webkitSlice). So we can safely say all modern browsers support this upload technique.

So let’s build an MVC sample application that can upload files larger than 4MB to Azure Storage.

 

Pre-Requisites

The application needs and assumes a few things during development and at runtime:

1. The end user is on a modern browser that supports the slice API. Multiple file selection is a bonus, though most modern browsers support both together.

2. You either have access to Azure Storage in the cloud or have the Storage Emulator running locally. This application will demo with the Emulator.

3. You have the Azure Storage client APIs installed and handy.

For points 2 and 3, you can refer to our previous article on how to get them going.

The Overall Approach

To upload a file in chunks, we have to send it from the client in parts, and this is where the slice API comes into the picture. The overall algorithm is shown below:

[Figure: chunked-file-upload-process]

As we can see above, the sequence of events is as follows:

1. The user selects one or more files using the File Upload HTML element.

2. The user initiates the upload.

3. The client decides on the number of chunks and the size of each, and sends this information to the server as metadata.

4. The server saves the metadata in session and returns an acknowledgement. If the acknowledgement is not received, the upload is aborted.

5. The client begins uploading, one chunk at a time.

6. Once the server receives a chunk, it uploads the chunk to Blob Storage, using the metadata to identify the blob, and then sends back an acknowledgement.

7. Once an acknowledgement is received, the client pushes the next chunk of the file through and updates the UI.

8. If there is a timeout, the client retries the same chunk. Steps 7 and 8 continue till there are no more chunks left or the maximum number of retries has been exhausted.

9. If all chunks are uploaded, success is declared on the UI. Else the upload is aborted and a failure message is shown on the UI.

Implementation

We start off with a new MVC4 project using .NET Framework 4.5 and the Internet Template. We’ll name our solution ‘AzureBlobChunkedFileUpload’.

We need references to the Azure Storage client libraries, so using NuGet we can download the ‘WindowsAzure.Storage’ package. From the Package Manager Console it would be:

PM> install-package WindowsAzure.Storage

Once the package installs, our pre-requisites are all set. We can jump into the Model now.

The Model

We’ll have two entities in our Model, CloudFilesModel and CloudFile.

The CloudFilesModel entity

This encapsulates the list of files in our Blob Storage and is used by the Index view that lists all the files in the Storage.

public class CloudFilesModel
{
    public CloudFilesModel()
        : this(null)
    {
    }

    public CloudFilesModel(IEnumerable<IListBlobItem> list)
    {
        Files = new List<CloudFile>();
        if (list != null && list.Count() > 0)
        {
            foreach (var item in list)
            {
                CloudFile info = CloudFile.CreateFromIListBlobItem(item);
                if (info != null)
                {
                    Files.Add(info);
                }
            }
        }
    }

    public List<CloudFile> Files { get; set; }
}

The CloudFile entity

This represents each file that we upload to Blob storage. However, unlike our previous article, this time each file has some meta information associated with it that helps upload it in chunks. So our CloudFile entity gets a few more properties to manage chunked upload.

public class CloudFile
{
    public string FileName { get; set; }
    public string URL { get; set; }
    public long Size { get; set; }
    public long BlockCount { get; set; }
    public CloudBlockBlob BlockBlob { get; set; }
    public DateTime StartTime { get; set; }
    public string UploadStatusMessage { get; set; }
    public bool IsUploadCompleted { get; set; }

    public static CloudFile CreateFromIListBlobItem(IListBlobItem item)
    {
        if (item is CloudBlockBlob)
        {
            var blob = (CloudBlockBlob)item;
            return new CloudFile
            {
                FileName = blob.Name,
                URL = blob.Uri.ToString(),
                Size = blob.Properties.Length
            };
        }
        return null;
    }
}

The Controller

We will use the HomeController to implement the Upload action. But before we can upload the file, we need to accept the file’s metadata. Since the metadata will be used across multiple POSTs from the client, we have to make sure it is available across requests. For the sake of this example, we’ll store it in Session. You may choose a more appropriate backing store for the metadata.
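For instance, here is a minimal sketch of such an alternative: a hypothetical UploadMetadataStore keyed by an upload id that the client would echo back with each chunk. This is illustrative only and not part of the article’s sample:

using System.Collections.Concurrent;

// A process-wide, thread-safe store for upload metadata, keyed by an
// upload id so that multiple concurrent uploads don't clobber each other.
public static class UploadMetadataStore
{
    private static readonly ConcurrentDictionary<string, CloudFile> store =
        new ConcurrentDictionary<string, CloudFile>();

    public static void Put(string uploadId, CloudFile file)
    {
        store[uploadId] = file;
    }

    public static CloudFile Get(string uploadId)
    {
        CloudFile file;
        return store.TryGetValue(uploadId, out file) ? file : null;
    }

    public static void Remove(string uploadId)
    {
        CloudFile ignored;
        store.TryRemove(uploadId, out ignored);
    }
}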

Accepting Metadata

The SetMetadata action method in the HomeController sets up the meta information about the file. As we can see in the code below, this includes the file name, its size, the number of blocks it is being broken into, and a reference to the blob in Storage. It also stores the start time and sets the IsUploadCompleted flag to false. Once done, it sends back JSON with the value ‘true’, indicating the metadata was accepted and the upload can commence.

[HttpPost]
public ActionResult SetMetadata(int blocksCount, string fileName, long fileSize)
{
    var container = CloudStorageAccount.Parse(
        ConfigurationManager.AppSettings["ConfigurationSectionKey"])
        .CreateCloudBlobClient()
        .GetContainerReference(
            ConfigurationManager.AppSettings["CloudStorageContainerReference"]);
    container.CreateIfNotExists();

    var fileToUpload = new CloudFile()
    {
        BlockCount = blocksCount,
        FileName = fileName,
        Size = fileSize,
        BlockBlob = container.GetBlockBlobReference(fileName),
        StartTime = DateTime.Now,
        IsUploadCompleted = false,
        UploadStatusMessage = string.Empty
    };
    Session.Add("CurrentFile", fileToUpload);
    return Json(true);
}
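The appSettings keys read above are not shown in the article, so here is what the corresponding web.config entries might look like when targeting the local Storage Emulator. The key names must match the ones used in the code; the values (the emulator connection string and a sample container name) are assumptions for this demo:

<appSettings>
  <!-- Storage connection string; swap in your account credentials
       when targeting Azure Storage in the cloud. -->
  <add key="ConfigurationSectionKey" value="UseDevelopmentStorage=true" />
  <add key="StorageConnectionString" value="UseDevelopmentStorage=true" />
  <!-- Blob container name; Azure requires lowercase container names. -->
  <add key="CloudStorageContainerReference" value="files" />
</appSettings>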

Accepting Chunked File

The chunked upload action method receives the chunk (block) number from the client, along with the slice of the file that needs to be uploaded, in the request. In the controller, we retrieve the block of the file that needs to be uploaded and send it off to the UploadCurrentChunk method.
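The UploadChunk action itself is not listed here; the complete version ships with the article’s source on GitHub. A minimal sketch, based on the description above and the helper methods that follow, might look like this:

[HttpPost]
public ActionResult UploadChunk(int id)
{
    // Read the uploaded slice out of the multipart request.
    HttpPostedFileBase request = Request.Files["Slice"];
    byte[] chunk = new byte[request.ContentLength];
    request.InputStream.Read(chunk, 0, request.ContentLength);

    CloudFile model = Session["CurrentFile"] as CloudFile;
    if (model == null)
    {
        // No metadata - most likely the session timed out.
        return Json(new
        {
            error = true,
            isLastBlock = false,
            message = "Failed to Upload file. Session timed out."
        });
    }

    // Push this block to Blob Storage; a non-null result means an error.
    JsonResult errorResult = UploadCurrentChunk(model, chunk, id);
    if (errorResult != null)
    {
        return errorResult;
    }

    // Last block received - commit all blocks into the final blob.
    if (id == model.BlockCount)
    {
        return CommitAllChunks(model);
    }
    return Json(new { error = false, isLastBlock = false, message = string.Empty });
}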

In the UploadCurrentChunk method, we use the BlockBlob reference that we initialized when we created the metadata object to upload the chunked byte stream as a block, using the PutBlock method. Note that Blob Storage requires block IDs to be Base64-encoded strings of equal length, so we encode the chunk number before passing it in. We also set a RetryPolicy to a linear retry after 10 seconds, with a total of three retries before it errors out. On error/exception, we send back a JsonResult with the exception details, which is passed back to the client.

private JsonResult UploadCurrentChunk(CloudFile model, byte[] chunk, int id)
{
    using (var chunkStream = new MemoryStream(chunk))
    {
        // Block IDs must be Base64-encoded strings of equal length;
        // a plain integer string causes a 400 Bad Request from the service.
        var blockId = Convert.ToBase64String(BitConverter.GetBytes(id));
        try
        {
            model.BlockBlob.PutBlock(
                blockId,
                chunkStream, null, null,
                new BlobRequestOptions()
                {
                    RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(10), 3)
                },
                null);
            return null;
        }
        catch (StorageException e)
        {
            Session.Clear();
            model.IsUploadCompleted = true;
            model.UploadStatusMessage = "Failed to Upload file. Exception - "
                + e.Message;
            return Json(new
            {
                error = true,
                isLastBlock = false,
                message = model.UploadStatusMessage
            });
        }
    }
}

Once the last chunk has been uploaded, we have to stitch all the separate blocks into one contiguous blob. This is done by the CommitAllChunks method. It builds an enumerable of the IDs of all the blocks that were put into the current BlockBlob (encoded the same way as in UploadCurrentChunk) and uses the PutBlockList method to commit them into a single blob, completing the file upload.

Once the upload is complete, it updates the metadata model with a success message and the time taken to upload the file. Finally, it sends back a JSON result with the success message, which is bubbled all the way back to the client.

private ActionResult CommitAllChunks(CloudFile model)
{
    model.IsUploadCompleted = true;
    bool errorInOperation = false;
    try
    {
        // The block IDs must match the ones passed to PutBlock,
        // i.e. Base64-encoded and of equal length.
        var blockList = Enumerable.Range(1, (int)model.BlockCount)
            .ToList<int>()
            .ConvertAll(rangeElement =>
                Convert.ToBase64String(BitConverter.GetBytes(rangeElement)));
        model.BlockBlob.PutBlockList(blockList);
        var duration = DateTime.Now - model.StartTime;
        float fileSizeInKb = model.Size / 1024f;
        string fileSizeMessage = fileSizeInKb > 1024 ?
            string.Concat((fileSizeInKb / 1024).ToString(CultureInfo.CurrentCulture), " MB") :
            string.Concat(fileSizeInKb.ToString(CultureInfo.CurrentCulture), " KB");
        model.UploadStatusMessage = string.Format(CultureInfo.CurrentCulture,
            "File uploaded successfully. {0} took {1} seconds to upload",
            fileSizeMessage, duration.TotalSeconds);
    }
    catch (StorageException e)
    {
        model.UploadStatusMessage = "Failed to Upload file. Exception - " + e.Message;
        errorInOperation = true;
    }
    finally
    {
        Session.Clear();
    }
    return Json(new
    {
        error = errorInOperation,
        isLastBlock = model.IsUploadCompleted,
        message = model.UploadStatusMessage
    });
}

Sending List of Files in Blob Storage

So far we’ve seen how to implement the file upload. Once uploaded, we should be able to see the list of files in our Blob storage. We’ll return the list of files in the Index Action method.

The code is the same as we had in our previous article – we create a storage account instance using the connection string, create a client instance, and retrieve the container using the client. Once we have the container, we build a list of the blobs in it and put them into the Files list of a CloudFilesModel object. Finally, we return the CloudFilesModel.

public ActionResult Index()
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
        CloudConfigurationManager.GetSetting("StorageConnectionString"));
    CloudBlobClient storageClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer storageContainer = storageClient.GetContainerReference(
        ConfigurationManager.AppSettings.Get("CloudStorageContainerReference"));
    CloudFilesModel blobsList = new CloudFilesModel(
        storageContainer.ListBlobs(useFlatBlobListing: true));
    return View(blobsList);
}

The Client Side

So far we have seen what it takes to accept chunks of a file and send them off to Blob Storage. Now comes the final and key part of this entire process: the client that will send the file in chunks.

The view has an HTML input element with the ID selectFile for selecting the files, and a fileUpload button for uploading them. We’ll use the jQuery UI progress bar widget to show the upload progress. For a change, this is going to be a deterministic progress bar, because we know exactly how many chunks there are in total and how many have been uploaded. We’ll assign the div with ID progressBar to the jQuery UI widget.

When the user clicks the fileUpload button, we execute the beginUpload method, where we calculate the number of chunks for a chunk size of 1MB and send the metadata to the server; the uploadMetaData method does this for us. Once the server returns successfully, we invoke sendFile, which computes the boundaries of the first chunk and calls the sendNextChunk function. This function slices up the file and sends the current chunk. If it returns successfully, we send the next chunk by calling sendNextChunk recursively, till all the chunks have been sent. If any chunk fails, we retry that chunk after a few seconds (three, per the retryAfterSeconds setting below).

As each chunk returns successfully, we update the progress bar with the new value and set a new status message. We add all of this code to a new JavaScript file called chunked-uploader.js.

var maxRetries = 3;
var blockLength = 1048576; // 1 MB chunk size
var numberOfBlocks = 1;
var currentChunk = 1;
var retryAfterSeconds = 3;

$(document).ready(function ()
{
    $(document).on("click", "#fileUpload", beginUpload);
    $("#progressBar").progressbar({ value: 0 });
});

var beginUpload = function ()
{
    var fileControl = document.getElementById("selectFile");
    if (fileControl.files.length > 0)
    {
        for (var i = 0; i < fileControl.files.length; i++)
        {
            uploadMetaData(fileControl.files[i], i);
        }
    }
}

var uploadMetaData = function (file, index)
{
    var size = file.size;
    numberOfBlocks = Math.ceil(file.size / blockLength);
    var name = file.name;
    currentChunk = 1;
    $.ajax({
        type: "POST",
        async: false,
        url: "/Home/SetMetadata?blocksCount=" + numberOfBlocks +
             "&fileName=" + name + "&fileSize=" + size
    }).done(function (state)
    {
        if (state === true)
        {
            displayStatusMessage("Starting Upload");
            sendFile(file, blockLength);
        }
    }).fail(function ()
    {
        displayStatusMessage("Failed to send MetaData");
    });
}

var sendFile = function (file, chunkSize)
{
    var start = 0,
        end = Math.min(chunkSize, file.size),
        retryCount = 0,
        sendNextChunk, fileChunk;
    displayStatusMessage("");
    sendNextChunk = function ()
    {
        fileChunk = new FormData();
        if (file.slice)
        {
            fileChunk.append('Slice', file.slice(start, end));
        }
        else if (file.webkitSlice)
        {
            fileChunk.append('Slice', file.webkitSlice(start, end));
        }
        else if (file.mozSlice)
        {
            fileChunk.append('Slice', file.mozSlice(start, end));
        }
        else
        {
            displayStatusMessage("Your browser does not support the slice API.");
            return;
        }
        var jqxhr = $.ajax({
            async: true,
            url: ('/Home/UploadChunk?id=' + currentChunk),
            data: fileChunk,
            cache: false,
            contentType: false,
            processData: false,
            type: 'POST'
        }).fail(function (request, error)
        {
            if (error !== 'abort' && retryCount < maxRetries)
            {
                ++retryCount;
                setTimeout(sendNextChunk, retryAfterSeconds * 1000);
            }
            if (error === 'abort')
            {
                displayStatusMessage("Aborted");
            }
            else
            {
                if (retryCount === maxRetries)
                {
                    displayStatusMessage("Upload timed out.");
                    resetControls();
                }
                else
                {
                    displayStatusMessage("Resuming Upload");
                }
            }
        }).done(function (notice)
        {
            if (notice.error || notice.isLastBlock)
            {
                displayStatusMessage(notice.message);
                return;
            }
            ++currentChunk;
            start = (currentChunk - 1) * blockLength;
            end = Math.min(currentChunk * blockLength, file.size);
            retryCount = 0;
            updateProgress();
            if (currentChunk <= numberOfBlocks)
            {
                sendNextChunk();
            }
        });
    }
    sendNextChunk();
}

var displayStatusMessage = function (message)
{
    $("#statusMessage").text(message);
}

var updateProgress = function ()
{
    var progress = currentChunk / numberOfBlocks * 100;
    if (progress <= 100)
    {
        $("#progressBar").progressbar("option", "value", parseInt(progress));
        displayStatusMessage("Uploaded " + progress + "%");
    }
}

// Minimal stub - the article doesn't show resetControls, so this simply
// resets the client-side state so the user can retry the upload.
var resetControls = function ()
{
    currentChunk = 1;
    $("#progressBar").progressbar("option", "value", 0);
}

The Views

We’ll have two Views, the Index view will show the list of files uploaded and the Upload View to actually upload the files.

The Index View

We clean up the Index view by removing the boilerplate content and update the UI to loop through the CloudFile instances in the CloudFilesModel object.

The markup for it is as follows:

@section featured {
    <section class="featured">
        <div class="content-wrapper">
            <hgroup class="title">
                <h1>@ViewBag.Title.</h1>
                <h2>@ViewBag.Message</h2>
            </hgroup>
            <p>
                For more .NET and MVC Tutorials visit
                <a href="https://www.dotnetcurry.com" title="DotNetCurry.com">https://www.dotnetcurry.com</a>.
            </p>
        </div>
    </section>
}
<h3>Current Files</h3>
<ul>
@foreach (var item in Model.Files)
{
  <li>
   <a href="@item.URL">@item.FileName</a> (@item.Size bytes)
  </li>
}
</ul>
@Html.ActionLink("Upload Another File", "UploadFile")

At the bottom of the page, we have an action link that takes us to the UploadFile view that does the actual file upload.

The Upload View

The Upload View is also very simple for the demo. As mentioned above, it has the selectFile input for selecting a file that needs to be uploaded. It has a fileUpload button that initiates the Submit process. It has a div with the id progressBar that will be used by the jQuery UI plugin to show progress.

Next there is the statusMessage label to show the upload progress or any error message that may occur.

Last but not least, there is the script reference to our custom chunked-uploader.js file:

@{
    ViewBag.Title = "UploadFile";
}

<h1>Upload BIG File to Azure Blob Storage</h1>

<input type="file" id="selectFile" name="selectFile" />
<input type="submit" name="fileUpload" id="fileUpload" value="Upload" />
<br />
<div id="progressBar" style="width:50%; height:20px; background-color:grey"></div>
<br />
<label id="statusMessage"></label>

@section Scripts{
    <script src="~/Scripts/chunked-uploader.js"></script>
}

The Demo

We are now all set; let’s run the application and see how it goes.

Step 1: Start the Azure Emulator from the Azure Command Prompt, or navigate to the Azure SDK folder directly. My installation is here:

C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\2012-10

The command to start the emulator is csrun /devstore

C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\2012-10\csrun /devstore

Step 2: Run the application, it navigates to the Index page by default. The page will be empty because our blob container is empty.

[Figure: index-page-first-time]

Step 3: Click on Upload Another File to navigate to the file upload page.

[Figure: upload-file-begin]

Step 4: Click on Browse and select a file > 4MB in size. I’ve selected a PDF that’s about 15MB. Click on Upload to begin the upload. You will see the progress bar update itself and the ‘Uploaded %’ label change as the file chunks are uploaded.

[Figure: upload-in-progress]

Step 5: Once the upload is complete, you’ll get the completed message.

[Figure: upload-complete]

Step 6: Click on the header to navigate back to the home page. You’ll see the file has been uploaded correctly.

[Figure: the uploaded file listed on the Index page]

To verify, you can click on the link to download the file. If you get a 404, it means your blob doesn’t have external access permissions. In Visual Studio, open the Server Explorer and expand the Azure Storage node till you get to your blob container. Hit F4 to bring up the property panel and set the Public access level to Blob.

[Figure: external-access-properties]

Now click on the link to access the file.
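If you’d rather not flip this switch by hand, the container’s access level can also be set in code. A minimal sketch, assuming you call it right after container.CreateIfNotExists() in the SetMetadata action:

// Allow anonymous read access to individual blobs
// (but not enumeration of the container's contents).
container.SetPermissions(new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Blob
});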

Conclusion

Some of you may have noted that the demo selected only one file. That’s because the current code, as is, is suitable for single-file upload only. However, it can easily be extended to allow multiple uploads in parallel. When doing multiple uploads, we have to send additional information to the server about which file and which chunk is being sent, which requires a proper ViewModel in JavaScript.

Update: The next part of this article shows how to Upload Multiple Files in Chunks from an ASP.NET MVC app to Azure Blob Storage.

Overall, we saw how we could upload one big file to Azure without having to worry about IIS upload limits.

Download the entire source code of this article (Github)

This article has been editorially reviewed by Suprotim Agarwal.



Author
Suprotim Agarwal, MCSD, MCAD, MCDBA, MCSE, is the founder of DotNetCurry, DNC Magazine for Developers, SQLServerCurry and DevCurry. He has also authored a couple of books 51 Recipes using jQuery with ASP.NET Controls and The Absolutely Awesome jQuery CookBook.

Suprotim has received the prestigious Microsoft MVP award for sixteen consecutive years. In a professional capacity, he is the CEO of A2Z Knowledge Visuals Pvt Ltd, a digital group that offers Digital Marketing and Branding services to businesses, both in a start-up and enterprise environment.

Get in touch with him on Twitter @suprotimagarwal or at LinkedIn



Feedback - Leave us some adulation, criticism and everything in between!
Comment posted by ChristopheH on Wednesday, June 5, 2013 10:45 AM
Hi,
many comments about this tutorial i've just followed :
1. In your controller, you just "forgot" to talk about the UploadChunk method. We have to copy directly your code from your Git repo to have a correct code.
2. When you try to upload a chunk with the PutBlock(...) method, the server returns a 400 Bad Request. It's because the BlockIDs have to be encoded in Base64. You could do this with [UploadCurrentChunk method] var blockId = Convert.ToBase64String(BitConverter.GetBytes(id)); and [CommitAllChunks method]var blockList = Enumerable.Range(1, (int)model.BlockCount).
                    ToList<int>().
                    ConvertAll(converter => Convert.ToBase64String(BitConverter.GetBytes(converter)));

3. In your javascript, you made few mistakes (that you corrected in your Git repo). For example, the first instruction is $document.ready instead of your $this.ready.
The reader should copy the file directly from your Git repo.

However, this was a great tutorial and helped me very much! Thank you!
Comment posted by Vipin on Monday, July 8, 2013 10:05 PM
m getting error when trying to upload files that is "Failed to send MetaData"... can you plz tell why m getting this error
Comment posted by Sumit on Wednesday, July 31, 2013 1:23 PM
Hi Vipin,
The JavaScript is posting data to /Home/SetMetadata where Home is the HomeController and SetMetadata is the Action method. If your controller or action method is called something else, update the JavaScript appropriately.
-Sumit.
Comment posted by Serdar on Wednesday, August 7, 2013 6:03 AM
Hi,

Thanks for the document. I have same error with Vipin, "Failed to send MetaData". Path location seems correct but I can not pass this part. Does it be a permission issue?

Thanks.
Comment posted by Luke on Friday, September 6, 2013 9:02 PM
Excellent article! We adapted this for a WebForms project and included a non supported browser fallback - it worked a treat.

Many thanks!
Comment posted by Richard on Monday, October 14, 2013 5:06 AM
Very good post !! I'm really happy with this.
Comment posted by JY on Thursday, July 10, 2014 4:07 PM
Thank you for sharing!
Comment posted by Norm on Thursday, July 17, 2014 11:03 AM
Thank you for this excellent code sample and article.  FYI, it looks like the CreateIfNotExists method in HomeController has been deprecated.  After commenting out and manually creating my blob container, it worked.
Comment posted by Suprotim Agarwal on Saturday, July 19, 2014 5:32 AM
@Norm: Thanks for sharing that info!
Comment posted by Mikola on Monday, October 6, 2014 11:12 PM
Hey all,
Failed to send metadata is caused by the js pointing to the wrong controller.
Comment posted by Simon on Friday, October 10, 2014 7:29 AM
Hi there,

I have the code working from the sample code. However, if I change all the references to be another controller name, and remove the Home controller, I am getting the "Failed to send MetaData" error.

Can you please describe what references need to be changed if I am using a different controller name other than Home?

Thanks in advance.
Comment posted by Fernando Nunes on Tuesday, November 18, 2014 11:17 AM
Hi,

Very useful, great post.
I modified some stuff to correct the error above reported, please try to make lowercase on the container name and the error about SetMetadata will disappear, assuming that authentication and connection string to storage container is correct.

Very good work,

Many thanks