
Azure Storage Services

Scott Duffy -- AZ-104, Lecture 9

Overview

Every Azure Storage account provides access to four distinct data services, each designed for a different category of storage workload. Understanding which service fits which scenario is critical for the AZ-104 exam and for real-world architecture decisions.

The four services are:

  1. Blob Storage -- Object storage for unstructured data (files, images, videos, backups)
  2. Azure Files -- Fully managed file shares with SMB and NFS protocol support
  3. Queue Storage -- Simple message queuing for asynchronous processing between components
  4. Table Storage -- NoSQL key-value store for structured, non-relational data
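
Each service is exposed through its own endpoint on the account. As a quick check, you can list those endpoints with the Azure CLI (a minimal sketch; myaccount and myRG are placeholder names):

bash
# List the service endpoints exposed by a storage account
az storage account show \
  --name myaccount \
  --resource-group myRG \
  --query primaryEndpoints

The output includes the blob, file, queue, and table endpoints, plus the web and dfs endpoints where those features are enabled.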

Exam Tip

Expect scenario-based questions that describe a workload and ask you to choose the correct storage service. The key differentiators are: protocol (REST vs SMB/NFS), data structure (flat vs hierarchical vs key-value vs messages), and access pattern (random read/write vs append-only vs FIFO).


Storage Account Service Architecture

d2
direction: down

sa: Azure Storage Account {
  style.fill: "#e8f4f8"
  style.stroke: "#333"
  label: "Storage Account\nhttps://{account}.*.core.windows.net"

  blob: Blob Storage {
    style.fill: "#4a9"
    style.font-color: "#fff"
    label: "Blob Storage\nhttps://{account}.blob.core.windows.net/{container}/{blob}"
  }

  files: Azure Files {
    style.fill: "#38a"
    style.font-color: "#fff"
    label: "Azure Files\nhttps://{account}.file.core.windows.net/{share}/{directory}/{file}"
  }

  queue: Queue Storage {
    style.fill: "#a63"
    style.font-color: "#fff"
    label: "Queue Storage\nhttps://{account}.queue.core.windows.net/{queue}"
  }

  table: Table Storage {
    style.fill: "#a36"
    style.font-color: "#fff"
    label: "Table Storage\nhttps://{account}.table.core.windows.net/{table}"
  }
}

app: Applications & Services {
  style.fill: "#f5f5f5"
  web: Web Apps
  vm: Virtual Machines
  functions: Azure Functions
  logic: Logic Apps
}

app.web -> sa.blob: "REST / SDK / AzCopy"
app.vm -> sa.files: "SMB / NFS mount"
app.functions -> sa.queue: "Queue trigger"
app.logic -> sa.table: "REST / SDK"

Service Decision Tree

Use this decision tree to select the appropriate storage service for a given workload:
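In outline (a plain-text sketch derived from the differentiators in the exam tip above):

txt
Need a mountable network drive with real folders (SMB/NFS)? -> Azure Files
Passing messages between decoupled components?              -> Queue Storage
Structured key-value entities addressed by key?             -> Table Storage
Anything else (files, media, backups, data lakes)           -> Blob Storage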


Comparison Table

| Feature | Blob | Files | Queue | Table |
|---|---|---|---|---|
| Protocol | REST/HTTP | SMB/NFS/REST | REST/HTTP | REST/HTTP |
| Structure | Flat (containers) | Hierarchical (directories) | FIFO messages | Key-value entities |
| Max item size | 190.7 TiB | 4 TiB (file) | 64 KB | 1 MB |
| Use case | Media, backups, data lakes | File shares, lift-and-shift | Async processing | NoSQL metadata |
| Access methods | URL, SDK, AzCopy | Mount, URL, SDK | SDK, REST | SDK, REST |

Azure Blob Storage

Blob storage is Azure's object storage solution for unstructured data. It is the most commonly used storage service and the default when you create a storage account. Blobs are organized into containers, which act as logical groupings (similar to folders, but the namespace is flat unless Data Lake Storage Gen2 hierarchical namespace is enabled).

Blob URL Format

https://{account}.blob.core.windows.net/{container}/{blob}

Example:

https://myaccount.blob.core.windows.net/images/photo.jpg

Containers

  • A container is a logical grouping of blobs -- think of it like a top-level folder
  • The namespace inside a container is flat by default (no real subdirectories)
  • You can simulate directories using / in blob names (e.g., logs/2026/01/app.log), but these are virtual paths, not real folders
  • Enabling Data Lake Storage Gen2 (hierarchical namespace) gives you real directories with POSIX ACLs
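
To see virtual paths in action, here is a short CLI sketch (account and container names are placeholders):

bash
# Upload a blob whose name contains a path-like prefix
az storage blob upload \
  --account-name myaccount \
  --container-name app-logs \
  --name logs/2026/01/app.log \
  --file ./app.log

# List only the blobs under the virtual "folder" logs/2026/
az storage blob list \
  --account-name myaccount \
  --container-name app-logs \
  --prefix logs/2026/ \
  --output table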

Blob Types

Azure Blob Storage supports three distinct blob types, each optimized for different access patterns:

| Blob Type | Use Case | Max Size | Operations |
|---|---|---|---|
| Block | Files, images, video, documents | 190.7 TiB | Upload in blocks (up to 4000 MiB each), random read |
| Append | Logs, audit trails | 195 GiB | Append only, no modification of existing blocks |
| Page | VM disks (VHD files) | 8 TiB | Random read/write in 512-byte pages |

Block Blobs

Block blobs are the default blob type and the most commonly used. They are composed of blocks, each up to 4000 MiB in size, and the maximum total blob size is 190.7 TiB. Block blobs are ideal for uploading large files efficiently -- you can upload blocks in parallel and then commit them as a single blob.

  • Upload files, images, videos, documents, and backups
  • SDK and AzCopy automatically handle block-level parallelism for large uploads
  • Support all access tiers (Hot, Cool, Cold, Archive)
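
A minimal sketch (placeholder names) that uploads a block blob and then moves it to a cooler tier:

bash
# Upload a block blob (block is the default --type)
az storage blob upload \
  --account-name myaccount \
  --container-name backups \
  --name archive-2026-01.tar.gz \
  --file ./archive-2026-01.tar.gz

# Move the blob to the Cool access tier
az storage blob set-tier \
  --account-name myaccount \
  --container-name backups \
  --name archive-2026-01.tar.gz \
  --tier Cool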

Append Blobs

Append blobs are optimized for append operations. New blocks can only be added to the end of the blob -- you cannot modify or delete existing blocks. This makes them ideal for logging scenarios where data is written sequentially and should never be altered.

  • Perfect for application logs, diagnostic logs, and audit trails
  • Cannot modify or overwrite existing data (append-only guarantee)
  • Maximum size: 195 GiB
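
As a sketch (placeholder names), the CLI can create a blob as the append type; ongoing appends are then usually driven from application code via the SDK:

bash
# Create the blob as an append blob instead of the default block type
az storage blob upload \
  --account-name myaccount \
  --container-name logs \
  --name app.log \
  --file ./first-chunk.log \
  --type append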

Page Blobs

Page blobs are optimized for random read/write operations and are organized in 512-byte pages. Azure Virtual Machine disks (both OS disks and data disks) are backed by page blobs.

  • Used internally by Azure for VHD files (managed and unmanaged disks)
  • Support random read/write access at the page level (512-byte aligned)
  • Maximum size: 8 TiB

INFO

In most day-to-day usage, you will work with block blobs. Page blobs are primarily managed by Azure itself for VM disk storage. Append blobs are a niche but important type for logging workloads where immutability of written data is required.

Screenshots

[Screenshot: blob container properties in the Azure Portal, including public access level, lease state, and metadata.]

[Screenshot: blob details view showing the blob's type, size, access tier, and URL.]


Hierarchical Namespace (Data Lake Storage Gen2)

Reference: John Savill

Enabling the Hierarchical Namespace (HNS) on a storage account upgrades Blob Storage into Azure Data Lake Storage Gen2. Instead of a flat namespace where / is simply part of a blob name, HNS provides real directory objects with metadata operations that complete in constant time, POSIX ACLs, and protocol support for NFS 3.0 and SFTP.

Flat vs Hierarchical Namespace

| Aspect | Flat Namespace (Default) | Hierarchical Namespace (HNS) |
|---|---|---|
| Directories | Virtual -- / is part of blob name | Real directory objects |
| Rename folder | Copy + delete every blob (slow) | Metadata change (instant) |
| Move folder | Copy + delete every blob (slow) | Pointer update (instant) |
| Delete folder | Delete every blob individually | Remove directory object (fast) |
| POSIX ACLs | Not available | Available |
| NFS 3.0 | Not available | Available |
| SFTP | Not available | Available |
| API | Blob REST API only | DFS REST API + Blob REST API |
| Hadoop/Spark | wasb:// driver | abfs:// driver (optimized) |

Flat namespace stores all blobs as a flat list with / in the name. HNS creates a real directory tree with O(1) rename, move, and delete operations on directories.
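
Because HNS is selected at account creation (see the trade-off warning below), the switch is a flag on account creation. A sketch with placeholder names:

bash
# Create a storage account with the hierarchical namespace enabled
az storage account create \
  --name mydatalake \
  --resource-group myRG \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2 \
  --hns true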

Feature Compatibility Matrix

HNS Feature Trade-offs

Enabling HNS disables several Blob Storage features. This decision is irreversible -- you cannot disable HNS after account creation.

| Feature | Without HNS | With HNS |
|---|---|---|
| Blob Versioning | Available | NOT available |
| Blob Index Tags | Available | NOT available |
| Point-in-Time Restore | Available | NOT available |
| Object Replication | Available | NOT available |
| Change Feed | Available | Available |
| Soft Delete (Blob) | Available | Available |
| Soft Delete (Container) | Available | Available |
| NFS 3.0 | Not available | Available |
| SFTP | Not available | Available |
| POSIX ACLs | Not available | Available |
| True Directories | Virtual only | Real objects |

Exam Tip

HNS cannot be changed after account creation. Know what you lose (versioning, blob index tags, point-in-time restore, object replication) and what you gain (NFS 3.0, SFTP, POSIX ACLs, real directories). Any question mentioning SFTP or NFS on blob storage requires HNS to be enabled.

Data Lake Pattern

Data Lake Storage Gen2 is designed for the ELT (Extract, Load, Transform) paradigm: ingest raw data first, store it cheaply, then transform it later. Storage is inexpensive compared to compute, so it is more cost-effective to land all data as-is and process it on demand using analytics engines like Synapse, Databricks, or HDInsight.

d2
direction: right

ingest: Ingest {
  style.fill: "#4a9"
  style.font-color: "#fff"
  label: "Ingest\n(IoT, APIs, Apps, Logs)"
}

raw: Raw Zone {
  style.fill: "#38a"
  style.font-color: "#fff"
  label: "Raw Zone\n(Store as-is, no transformation)"
}

curated: Curated Zone {
  style.fill: "#a63"
  style.font-color: "#fff"
  label: "Curated Zone\n(Clean, transform, enrich)"
}

serve: Serve {
  style.fill: "#a36"
  style.font-color: "#fff"
  label: "Serve\n(Analytics / BI / ML)"
}

ingest -> raw: "Land raw data"
raw -> curated: "ELT pipelines"
curated -> serve: "Optimized queries"

DFS API and URL Format

When HNS is enabled, the storage account exposes a DFS (Distributed File System) endpoint in addition to the standard blob endpoint:

https://{account}.dfs.core.windows.net/{filesystem}/{directory}/{file}

The standard Blob API also works for HNS-enabled accounts, but the DFS endpoint is the native interface for Data Lake operations. The ABFS driver (abfs://) is used by Hadoop, Spark, and Databricks to interact with Data Lake Storage Gen2 -- it replaces the older wasb:// driver and provides significantly better performance and reliability.
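
The az storage fs command group works against the DFS endpoint. A sketch (placeholder names) of the constant-time directory rename described above:

bash
# Create a filesystem (the HNS equivalent of a container)
az storage fs create \
  --name raw-data \
  --account-name mydatalake

# Create a real directory
az storage fs directory create \
  --file-system raw-data \
  --name logs/2026/01 \
  --account-name mydatalake

# Rename/move the directory -- a single metadata operation under HNS
az storage fs directory move \
  --file-system raw-data \
  --name logs/2026/01 \
  --new-directory raw-data/archive/2026/01 \
  --account-name mydatalake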

Lab: Flat vs Hierarchical Namespace Comparison

  1. Create two storage accounts -- one with HNS disabled (flat namespace) and one with HNS enabled (hierarchical namespace)
  2. In both accounts, create a container/filesystem named test-data
  3. Upload several files with path-like names (e.g., logs/2026/01/app.log, logs/2026/01/error.log)
  4. In the flat namespace account, try renaming the logs/2026/01/ "folder" -- observe that Azure must copy and delete each blob individually
  5. In the HNS account, rename the logs/2026/01/ directory -- observe the instant metadata operation
  6. In the HNS account, navigate to the DFS endpoint in the portal and explore the directory structure
  7. Compare the available features (versioning, index tags) between the two accounts

Static Website Hosting

Reference: John Savill

Azure Storage accounts can serve static web content (HTML, CSS, JavaScript, images) directly from a special blob container called $web. This is a lightweight, low-cost option for hosting single-page applications, documentation sites, landing pages, and other content that does not require server-side processing.

How It Works

  1. Enable static website hosting at the storage account level
  2. Azure automatically creates a $web container
  3. Upload your HTML, CSS, JavaScript, and image files to the $web container
  4. Content is served at: https://{account}.z{N}.web.core.windows.net
  5. Configure the index document (e.g., index.html) and error document (e.g., 404.html)

Static Website vs Azure Static Web Apps

| Feature | Storage Static Website | Azure Static Web Apps |
|---|---|---|
| Hosting | Blob $web container | Managed service |
| CDN | Manual (add Azure CDN) | Built-in global CDN |
| Backend APIs | None | Managed Azure Functions |
| Custom domains | CNAME (HTTP only natively) | Built-in with free SSL |
| CI/CD | Manual upload | GitHub/DevOps integration |
| Cost | Storage charges only | Free tier available |

Custom Domain

To use a custom domain with a static website, create a CNAME record pointing to the web endpoint ({account}.z{N}.web.core.windows.net). However, the native static website endpoint only supports HTTP for custom domains. For HTTPS with a custom domain, place Azure CDN in front of the storage account -- the CDN provides SSL termination and global caching.
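
To find the exact endpoint to point the CNAME at, query the account (placeholder names):

bash
# Show the static website endpoint for the account
az storage account show \
  --name myaccount \
  --resource-group myRG \
  --query "primaryEndpoints.web" \
  --output tsv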

CLI Reference

bash
# Enable static website hosting
az storage blob service-properties update \
  --account-name myaccount \
  --static-website \
  --index-document index.html \
  --404-document 404.html

# Upload site content to $web container
az storage blob upload-batch \
  --account-name myaccount \
  --source ./site \
  --destination '$web'
powershell
# Enable static website hosting
Enable-AzStorageStaticWebsite `
  -Context $ctx `
  -IndexDocument "index.html" `
  -ErrorDocument404Path "404.html"

# Upload site content to $web container
# Upload site content to $web, preserving relative paths as blob names
$root = (Resolve-Path "./site").Path
Get-ChildItem -Path $root -Recurse -File | ForEach-Object {
    Set-AzStorageBlobContent `
      -Container '$web' `
      -File $_.FullName `
      -Blob $_.FullName.Substring($root.Length + 1).Replace('\', '/') `
      -Context $ctx
}

Lab: Host a Static Website

  1. In your storage account, go to Data management > Static website
  2. Click Enabled, set the index document to index.html and the error document to 404.html
  3. Note the primary endpoint URL that Azure provides
  4. Navigate to the $web container that was automatically created
  5. Create a simple index.html file:
    html
    <!DOCTYPE html>
    <html><body><h1>Hello from Azure Static Website!</h1></body></html>
  6. Upload index.html to the $web container
  7. Create and upload a 404.html error page
  8. Open the primary endpoint URL in a browser -- verify the index page renders
  9. Navigate to a non-existent path (e.g., /does-not-exist) -- verify the 404 page renders

Azure Files

Azure Files provides fully managed file shares in the cloud, accessible via the industry-standard SMB (Server Message Block) and NFS (Network File System) protocols. Unlike Blob Storage's flat container model, Azure Files offers a true hierarchical directory structure with real folders.

Key Characteristics

| Property | Details |
|---|---|
| Protocols | SMB (port 445), NFS (port 2049), REST API |
| Directory structure | Hierarchical -- real folders and subfolders |
| Mountable | Windows, Linux, macOS -- mounts as a native network drive |
| Max share size | 100 TiB (standard and premium) |
| Max file size | 4 TiB |
| Encryption | SMB 3.x encryption in transit, Azure Storage encryption at rest |
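
As a sketch of mounting a share over SMB on Linux (account, share, and key are placeholders; the portal's Connect blade generates the equivalent commands for Windows and macOS):

bash
# Mount an Azure file share over SMB 3.1.1 (requires outbound port 445)
sudo mkdir -p /mnt/myshare
sudo mount -t cifs //myaccount.file.core.windows.net/myshare /mnt/myshare \
  -o vers=3.1.1,username=myaccount,password="$STORAGE_KEY",serverino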

Use Cases

  • Lift-and-shift: Migrate on-premises file shares to the cloud without changing application code
  • Shared application settings: Configuration files shared across multiple application instances
  • Diagnostic logs: Centralized log collection from multiple VMs or services
  • Dev/test environments: Shared tools, scripts, and test data across teams

Azure File Sync

Azure File Sync extends Azure Files to on-premises Windows Servers, enabling:

  • Cloud tiering: Frequently accessed files stay local; infrequently accessed files are tiered to Azure and recalled on demand
  • Multi-site sync: Synchronize files across multiple on-premises servers and Azure
  • Centralized backup: All data is backed up in Azure regardless of which server it lives on
  • Fast disaster recovery: Provision a new server and sync from the cloud

TIP

Azure File Sync is a powerful hybrid solution. Think of it as a CDN for file shares -- hot files are cached locally for low-latency access, while the full dataset lives in Azure. This is a common exam topic when questions mention "extending on-premises file servers to the cloud."

File Share URL Format

https://{account}.file.core.windows.net/{share}/{directory}/{file}

Azure Queue Storage

Azure Queue Storage provides simple, reliable message queuing for asynchronous communication between application components. It is designed for decoupling -- a producer pushes messages to the queue, and a consumer processes them independently.

Key Characteristics

| Property | Details |
|---|---|
| Protocol | REST/HTTP |
| Max message size | 64 KB |
| Queue capacity | Millions of messages (limited by storage account capacity) |
| Default message TTL | 7 days (configurable, up to never expiring) |
| Visibility timeout | Configurable -- hides a message from other consumers while one consumer processes it |
| Ordering | Approximate FIFO (not guaranteed strict ordering) |
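
The visibility timeout drives the get-process-delete loop. A sketch with placeholder names:

bash
# Dequeue a message, hiding it from other consumers for 60 seconds
az storage message get \
  --queue-name myqueue \
  --account-name myaccount \
  --visibility-timeout 60

# After processing, delete it using the id and popReceipt returned above
az storage message delete \
  --queue-name myqueue \
  --account-name myaccount \
  --id "<message-id>" \
  --pop-receipt "<pop-receipt>"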

Use Cases

  • Decoupling application components: Frontend submits orders; backend processes them asynchronously
  • Work item scheduling: Background jobs triggered by queue messages
  • Order processing: E-commerce order pipeline with retry and dead-letter capability
  • Load leveling: Smooth out traffic spikes by queuing requests during peak load

Queue URL Format

https://{account}.queue.core.windows.net/{queue}

Queue vs Service Bus

When to Use Azure Service Bus Instead

Azure Queue Storage is simple and cost-effective, but for advanced messaging scenarios, consider Azure Service Bus:

| Feature | Queue Storage | Service Bus |
|---|---|---|
| Max message size | 64 KB | 256 KB (Standard), 100 MB (Premium) |
| Ordering | Approximate FIFO | Guaranteed FIFO (sessions) |
| Topics/Subscriptions | No | Yes (pub/sub pattern) |
| Transactions | No | Yes |
| Duplicate detection | No | Yes |
| Dead-letter queue | No (manual) | Built-in |
| Sessions | No | Yes (grouped processing) |
| Cost | Very low | Higher |

Rule of thumb: Use Queue Storage for simple, high-volume queuing. Use Service Bus when you need guaranteed ordering, pub/sub topics, transactions, or dead-letter handling.


Azure Table Storage

Azure Table Storage is a NoSQL key-value store for structured, non-relational data. It is schema-less, meaning each entity (row) in a table can have a completely different set of properties (columns). The only required properties are PartitionKey, RowKey, and Timestamp.

Key Characteristics

| Property | Details |
|---|---|
| Protocol | REST/HTTP, OData |
| Key structure | PartitionKey + RowKey = unique identifier |
| Schema | Schema-less -- entities can have different properties |
| Max entity size | 1 MB |
| Max properties per entity | 252 (plus 3 system properties) |
| Scalability | Automatic partitioning based on PartitionKey |
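
Because PartitionKey + RowKey uniquely identify an entity, a point lookup supplies both keys. A sketch (placeholder names):

bash
# Point query: fetch a single entity by PartitionKey and RowKey
az storage entity show \
  --table-name mytable \
  --account-name myaccount \
  --partition-key Users \
  --row-key user001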

Use Cases

  • Structured non-relational data: Data that does not require joins, foreign keys, or complex queries
  • Address books and user profiles: Flexible schema for varying user attributes
  • Device information: IoT device metadata with heterogeneous properties
  • Application metadata: Configuration, feature flags, and settings storage

Table URL Format

https://{account}.table.core.windows.net/{table}

Table Storage vs Cosmos DB Table API

When to Use Cosmos DB Table API Instead

Azure Table Storage is simple and inexpensive, but Cosmos DB Table API provides significant advantages for production workloads:

| Feature | Table Storage | Cosmos DB Table API |
|---|---|---|
| Latency | Variable | Single-digit ms (guaranteed SLA) |
| Throughput | ~2,000 ops/sec per partition (~20,000 per account) | Unlimited (configurable RU/s) |
| Global distribution | Single region | Multi-region with automatic failover |
| Indexing | Primary index only (PartitionKey + RowKey) | Automatic secondary indexing on all properties |
| SLA | 99.9% | 99.999% (multi-region) |
| Cost | Lower | Higher (pay for provisioned throughput) |

Rule of thumb: Use Table Storage for development, low-throughput scenarios, and cost-sensitive workloads. Migrate to Cosmos DB Table API when you need global distribution, guaranteed low latency, or higher throughput. The SDK is compatible -- migration requires minimal code changes.


CLI and PowerShell Reference

Blob Storage Operations

bash
# Create a blob container
az storage container create \
  --name mycontainer \
  --account-name myaccount

# Upload a blob
az storage blob upload \
  --account-name myaccount \
  --container-name mycontainer \
  --name myfile.txt \
  --file ./myfile.txt

# List blobs in a container
az storage blob list \
  --account-name myaccount \
  --container-name mycontainer \
  --output table

# Download a blob
az storage blob download \
  --account-name myaccount \
  --container-name mycontainer \
  --name myfile.txt \
  --file ./downloaded.txt

# Delete a blob
az storage blob delete \
  --account-name myaccount \
  --container-name mycontainer \
  --name myfile.txt
powershell
# Get storage context
$ctx = New-AzStorageContext `
  -StorageAccountName "myaccount" `
  -StorageAccountKey (Get-AzStorageAccountKey `
    -ResourceGroupName "myRG" `
    -Name "myaccount")[0].Value

# Create a blob container
New-AzStorageContainer -Name "mycontainer" -Context $ctx

# Upload a blob
Set-AzStorageBlobContent `
  -Container "mycontainer" `
  -File "./myfile.txt" `
  -Blob "myfile.txt" `
  -Context $ctx

# List blobs in a container
Get-AzStorageBlob -Container "mycontainer" -Context $ctx

# Download a blob
Get-AzStorageBlobContent `
  -Container "mycontainer" `
  -Blob "myfile.txt" `
  -Destination "./downloaded.txt" `
  -Context $ctx

# Delete a blob
Remove-AzStorageBlob `
  -Container "mycontainer" `
  -Blob "myfile.txt" `
  -Context $ctx

Azure Files Operations

bash
# Create a file share
az storage share create \
  --name myshare \
  --account-name myaccount \
  --quota 100

# Create a directory in the share
az storage directory create \
  --name mydir \
  --share-name myshare \
  --account-name myaccount

# Upload a file to the share
az storage file upload \
  --share-name myshare \
  --source ./myfile.txt \
  --path mydir/myfile.txt \
  --account-name myaccount

# List files in a share
az storage file list \
  --share-name myshare \
  --account-name myaccount \
  --output table
powershell
$ctx = New-AzStorageContext `
  -StorageAccountName "myaccount" `
  -StorageAccountKey "<key>"

# Create a file share
New-AzStorageShare -Name "myshare" -Context $ctx

# Create a directory in the share
New-AzStorageDirectory `
  -ShareName "myshare" `
  -Path "mydir" `
  -Context $ctx

# Upload a file to the share
Set-AzStorageFileContent `
  -ShareName "myshare" `
  -Source "./myfile.txt" `
  -Path "mydir/myfile.txt" `
  -Context $ctx

# List files in a share
Get-AzStorageFile -ShareName "myshare" -Context $ctx

Queue Storage Operations

bash
# Create a queue
az storage queue create \
  --name myqueue \
  --account-name myaccount

# Send a message to the queue
az storage message put \
  --queue-name myqueue \
  --account-name myaccount \
  --content "Hello World"

# Peek at messages (does not dequeue)
az storage message peek \
  --queue-name myqueue \
  --account-name myaccount \
  --num-messages 5

# Get (dequeue) a message
az storage message get \
  --queue-name myqueue \
  --account-name myaccount

# Delete a queue
az storage queue delete \
  --name myqueue \
  --account-name myaccount
powershell
$ctx = New-AzStorageContext `
  -StorageAccountName "myaccount" `
  -StorageAccountKey "<key>"

# Create a queue
New-AzStorageQueue -Name "myqueue" -Context $ctx

# Get the queue reference
$queue = Get-AzStorageQueue -Name "myqueue" -Context $ctx

# Send a message (wait for the async call so the script doesn't exit early)
$queueMessage = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new("Hello World")
$queue.CloudQueue.AddMessageAsync($queueMessage).Wait()

# Peek at messages (does not dequeue)
$queue.CloudQueue.PeekMessagesAsync(5).Result

# Dequeue a message, then delete it once processed
$message = $queue.CloudQueue.GetMessageAsync().Result
$queue.CloudQueue.DeleteMessageAsync($message).Wait()

Table Storage Operations

bash
# Create a table
az storage table create \
  --name mytable \
  --account-name myaccount

# Insert an entity
az storage entity insert \
  --table-name mytable \
  --account-name myaccount \
  --entity PartitionKey=Users RowKey=user001 Name="Alice" Email="alice@example.com"

# Query entities by partition key
az storage entity query \
  --table-name mytable \
  --account-name myaccount \
  --filter "PartitionKey eq 'Users'"

# Delete an entity
az storage entity delete \
  --table-name mytable \
  --account-name myaccount \
  --partition-key Users \
  --row-key user001

# Delete a table
az storage table delete \
  --name mytable \
  --account-name myaccount
powershell
$ctx = New-AzStorageContext `
  -StorageAccountName "myaccount" `
  -StorageAccountKey "<key>"

# Create a table
New-AzStorageTable -Name "mytable" -Context $ctx

# Get table reference
$table = (Get-AzStorageTable -Name "mytable" -Context $ctx).CloudTable

# Insert an entity (property values must be EntityProperty objects)
$entity = New-Object Microsoft.Azure.Cosmos.Table.DynamicTableEntity("Users", "user001")
$entity.Properties.Add("Name", [Microsoft.Azure.Cosmos.Table.EntityProperty]::GeneratePropertyForString("Alice"))
$entity.Properties.Add("Email", [Microsoft.Azure.Cosmos.Table.EntityProperty]::GeneratePropertyForString("alice@example.com"))
$table.ExecuteAsync(
  [Microsoft.Azure.Cosmos.Table.TableOperation]::InsertOrReplace($entity)
).Result | Out-Null

# Query entities by partition key
$filter = "PartitionKey eq 'Users'"
$query = New-Object Microsoft.Azure.Cosmos.Table.TableQuery
$query.FilterString = $filter
$table.ExecuteQuerySegmentedAsync($query, $null).Result.Results

# Delete an entity
$entityToDelete = $table.ExecuteAsync(
  [Microsoft.Azure.Cosmos.Table.TableOperation]::Retrieve("Users", "user001")
).Result.Result
$table.ExecuteAsync(
  [Microsoft.Azure.Cosmos.Table.TableOperation]::Delete($entityToDelete)
).Result | Out-Null

Lab Exercises

Prerequisites

Ensure you have an active Azure subscription, a storage account created (from earlier labs), and the Azure CLI installed locally or access to Azure Cloud Shell.

Lab 1: Create a Blob Container and Upload Different File Types

  1. Navigate to your storage account in the Azure Portal
  2. Go to Data storage > Containers
  3. Click + Container, name it lab-blobs, set access level to Private
  4. Open the container and upload three different files:
    • A text file (.txt)
    • An image file (.png or .jpg)
    • A PDF document (.pdf)
  5. Click on each blob and observe:
    • The blob type (should be Block blob for all three)
    • The blob URL, access tier, size, and content type
  6. Copy the URL of the image blob and try opening it in a browser -- note the 403 error because the container is private

Lab 2: Create a File Share and Add Directories and Files

  1. In your storage account, go to Data storage > File shares
  2. Click + File share, name it lab-share, set quota to 5 GiB
  3. Open the share and create two directories: config and logs
  4. Navigate into the config directory and upload a sample configuration file
  5. Navigate into the logs directory and upload a sample log file
  6. Note the hierarchical structure -- these are real directories, unlike blob virtual paths
  7. Click Connect on the file share and review the mount commands for Windows, Linux, and macOS

Lab 3: Create a Queue, Send Messages, Peek and Dequeue

  1. In your storage account, go to Data storage > Queues
  2. Click + Queue, name it lab-queue
  3. Open the queue and click Add message -- add three messages:
    • Message 1: {"orderId": "001", "item": "Widget"}
    • Message 2: {"orderId": "002", "item": "Gadget"}
    • Message 3: {"orderId": "003", "item": "Gizmo"}
  4. Observe the messages in the queue -- note they appear in order
  5. Click Peek on the first message (this does NOT remove it from the queue)
  6. Click Dequeue on the first message (this removes it from the queue)
  7. Verify only two messages remain

Lab 4: Create a Table, Insert Entities, and Query by Partition Key

  1. In your storage account, go to Data storage > Tables
  2. Click + Table, name it labtable
  3. Open the table and click Add entity
  4. Insert three entities with PartitionKey = Employees:
    • RowKey: emp001, Name: Alice, Department: Engineering
    • RowKey: emp002, Name: Bob, Department: Marketing
    • RowKey: emp003, Name: Carol, Department: Engineering, Title: Senior Engineer
  5. Note that emp003 has an extra property (Title) that the others do not -- this demonstrates the schema-less nature of Table Storage
  6. Use the query builder to filter by PartitionKey eq 'Employees' and verify all three entities are returned
  7. Add a filter for Department eq 'Engineering' and verify only Alice and Carol are returned

Lab 5: Compare the Portal Experience for Each Service Type

  1. Open each of the four services you created (Containers, File shares, Queues, Tables) side by side
  2. Compare the portal experience:
    • Blobs: Flat list with virtual path navigation, access tier shown per blob
    • Files: True folder hierarchy with breadcrumb navigation, mount button available
    • Queues: Message list with peek/dequeue actions, message metadata displayed
    • Tables: Entity grid with query builder, schema varies per entity
  3. Document the key differences in navigation, available actions, and metadata displayed

Clean Up

After completing the labs, delete the test resources to avoid ongoing charges:

bash
az storage container delete --name lab-blobs --account-name <your-account>
az storage share delete --name lab-share --account-name <your-account>
az storage queue delete --name lab-queue --account-name <your-account>
az storage table delete --name labtable --account-name <your-account>

Key Takeaways

Exam Tip -- Summary

  1. Blob Storage is for unstructured data (files, images, videos). Block blobs are the default type. Append blobs are for logs. Page blobs are for VM disks.
  2. Azure Files provides SMB/NFS file shares with true hierarchical directories -- the go-to answer for lift-and-shift file server migrations.
  3. Queue Storage decouples application components with simple 64 KB messages. For advanced messaging (topics, sessions, transactions), use Azure Service Bus instead.
  4. Table Storage is a NoSQL key-value store identified by PartitionKey + RowKey. For global distribution and guaranteed latency, use Cosmos DB Table API instead.
  5. Each service has a distinct URL pattern under the storage account: .blob., .file., .queue., .table. followed by core.windows.net.
  6. Know the max sizes: Block blob = 190.7 TiB, Page blob = 8 TiB, File = 4 TiB, Queue message = 64 KB, Table entity = 1 MB.
