
S3 Bucket vs Azure Blob Storage Compared Honestly

Dustin · 16 min read

Every "S3 vs Blob Storage" article on the internet reads like it was written by someone terrified of upsetting either Amazon or Microsoft. You get a feature table, some vague language about how "the best choice depends on your needs," and zero actual guidance.

I've used both. I've paid real bills on both. And I have opinions.

Here's the comparison I wish someone had written when I was making this decision.

What's actually different between S3 and Azure Blob Storage?

S3 (Simple Storage Service) is AWS's object storage. Azure Blob Storage is Microsoft's. They both store files in the cloud. They both scale to petabytes. They both cost fractions of a cent per gigabyte per month. They both have 11+ nines of durability, meaning your data isn't going anywhere.

The differences that matter are less about storage itself and more about everything around it: how you access the data, what ecosystem you're locked into, how pricing actually works at scale, and which operational gotchas will bite you at 2am. (Microsoft publishes their own side-by-side with AWS if you want the official mapping, but it reads like documentation because it is.)

| | AWS S3 | Azure Blob Storage |
|---|---|---|
| Durability | 99.999999999% (11 nines) | Up to 99.9999999999999999% (16 nines with GRS) |
| Max object size | 5 TB | 4.75 TB (block blob), 190.7 TB (append blob) |
| Storage tiers | Standard, Intelligent-Tiering, Glacier Instant/Flexible/Deep Archive, Express One Zone | Hot, Cool, Cold, Archive, Premium Block Blobs |
| API standard | De facto industry standard (S3 API) | Proprietary REST API |
| Request rate limits | 5,500/sec per prefix (effectively unlimited) | 20,000/sec per storage account |
| Best for | Multi-cloud, high request volume, broad tooling support | Microsoft shops, premium latency workloads, cost-sensitive hot storage |

That table covers the specs. The rest of this article covers what the specs don't tell you.

Pricing: the real math, not the marketing page

This is where most comparisons fall apart. They quote list prices for storage per GB, say "Azure is a bit cheaper," and move on. The actual bill is more complicated.

Let's work through a real scenario based on S3 pricing and Azure Blob pricing as of early 2026: 50 TB of data, mixed access patterns. Roughly 40 TB sits cold (accessed maybe once a month), 10 TB is hot (read thousands of times per day). You're running 5 million read requests and 500,000 write requests monthly. And you're pulling 2 TB of data out to the internet each month.

Storage costs

For the hot 10 TB:

  • S3 Standard: ~$230/month
  • Azure Blob Hot: ~$180/month

Azure wins here by about 20%. That's consistent and real.

For the cold 40 TB:

  • S3 Glacier Instant Retrieval: ~$160/month
  • Azure Cool: ~$360/month
  • Azure Cold (if you can tolerate 180-day minimum): ~$180/month

S3 Glacier Instant Retrieval is genuinely cheap for data you rarely touch but need fast when you do. Azure's Cool tier isn't a direct equivalent, and their Cold tier has a long minimum storage commitment.

Request costs

Here's where it gets sneaky. Both platforms charge per API request, and the rates differ between read and write operations.

  • S3: GET requests at $0.0004 per 1,000. PUT at $0.005 per 1,000.
  • Azure Blob Hot: Read operations at $0.0004 per 10,000. Write at $0.005 per 10,000.

Wait. Azure charges per 10,000 operations and S3 charges per 1,000? Yes. This means Azure's per-request cost is roughly 10x cheaper for the same operation count. At 5 million reads monthly, the difference is small in absolute terms ($2 vs $0.20). At 5 billion reads, it's a real line item.
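The per-1,000 vs per-10,000 denominators are exactly where this math goes wrong in spreadsheets. A quick sketch of the monthly request-cost model, using the list rates quoted above (an early-2026 snapshot; verify against current pricing pages before budgeting):

```python
# List rates from the comparison above. Note the different denominators:
# S3 bills per 1,000 requests, Azure per 10,000.
S3_GET_PER_1K = 0.0004
S3_PUT_PER_1K = 0.005
AZ_READ_PER_10K = 0.0004
AZ_WRITE_PER_10K = 0.005

def s3_request_cost(reads, writes):
    """Monthly S3 request charges in dollars."""
    return reads / 1_000 * S3_GET_PER_1K + writes / 1_000 * S3_PUT_PER_1K

def azure_request_cost(reads, writes):
    """Monthly Azure Blob Hot request charges in dollars."""
    return reads / 10_000 * AZ_READ_PER_10K + writes / 10_000 * AZ_WRITE_PER_10K

# The article's scenario: 5M reads, 500K writes per month.
print(round(s3_request_cost(5_000_000, 500_000), 2))     # 4.5
print(round(azure_request_cost(5_000_000, 500_000), 2))  # 0.45
```

At this scale the gap is pocket change either way; multiply the inputs by 1,000 and it becomes a line item worth arguing about.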

Egress: the quiet budget killer

Both platforms charge you to move data out to the internet. This is the cost nobody thinks about until the bill arrives.

  • S3: first 100 GB free, then $0.09/GB, dropping at higher tiers
  • Azure Blob: first 100 GB free (with some plans), then $0.087/GB

For 2 TB of monthly egress, you're looking at roughly $170-180 on either platform. The rates are close enough that egress shouldn't be your deciding factor. But it should be a factor in your architecture. If you're pulling data out of cloud storage and processing it somewhere else (another cloud, your office, a CDN), egress can dwarf storage costs.
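To make the egress estimate concrete, here's a minimal flat-rate model. It assumes the first 100 GB is free and ignores the step-downs at higher volumes, so treat it as an upper bound:

```python
def egress_cost(gb, free_gb=100, rate=0.09):
    """Rough monthly internet egress cost in dollars.

    Flat rate past the free allowance; the real tiered schedules
    step the rate down at higher volumes, so this overestimates
    slightly for very large transfers.
    """
    return max(gb - free_gb, 0) * rate

# The article's scenario: 2 TB (2,048 GB) out per month.
print(round(egress_cost(2048)))              # ~175 at S3's $0.09/GB
print(round(egress_cost(2048, rate=0.087)))  # ~169 at Azure's $0.087/GB
```

Both land in the $170-180 range the text describes, which is why egress is a tiebreaker only at much larger volumes.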

Early deletion penalties

Both platforms penalize you for moving data into a cold tier and then accessing it before the minimum retention period. S3 Glacier Flexible has a 90-day minimum. Azure Archive has a 180-day minimum. Delete or move the data early and you pay for the full period anyway.

I've seen teams move data into Archive tiers with lifecycle rules, then realize they need it two weeks later. The retrieval fees plus early deletion charges wiped out months of storage savings.

Bottom line on cost

Azure Blob is cheaper for straightforward hot storage. S3 is cheaper for cold and archival workloads, and S3 Intelligent-Tiering is genuinely useful if your access patterns are unpredictable. AWS claims Intelligent-Tiering has saved customers over $4 billion collectively. Even if that number is inflated, the feature works well in practice.

If you're spending less than $1,000/month on storage, pick whichever cloud you're already using. The savings from switching won't cover the engineering time.

Performance: it depends on what you're doing

The "who's faster" question doesn't have a clean answer because they're fast at different things.

Latency. S3 Standard delivers millisecond first-byte latency. Azure Blob Hot runs 10-20ms. For most applications, you won't notice. But if you're serving assets directly to users or feeding data into ML pipelines, it adds up. S3 Express One Zone brings that down to single-digit milliseconds. Azure Premium Block Blobs (backed by NVMe SSDs) go sub-millisecond. Both options cost more and trade durability or availability for speed.

Throughput. Both max out around 60 Gbps egress. Both support parallel uploads. Azure's hierarchical namespace (Data Lake Storage Gen2) gives you filesystem-like performance for analytics workloads. S3 doesn't have a direct equivalent, though most analytics tools work fine with S3's flat namespace.

Request rates. This is where S3 has a structural advantage. S3's limit is 5,500 requests/second per prefix, and you can have unlimited prefixes. In practice, this means you can partition your keys to achieve effectively unlimited request rates. Azure Blob's limit is 20,000 requests/second per storage account. You can create multiple accounts to get around this, but it's more operational overhead.

If your workload involves millions of small requests per second (like serving image thumbnails or log ingestion), S3's per-prefix model gives you more room before you have to think about sharding.
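The "partition your keys" trick above is usually just a hash prefix. A minimal sketch, with an illustrative shard count (the prefix scheme and names are mine, not AWS's):

```python
import hashlib

def sharded_key(object_name, shards=16):
    """Spread object keys across N prefixes so no single prefix
    has to absorb more than S3's ~5,500 req/s limit.

    The hash makes placement deterministic: the same name always
    maps to the same shard, so reads don't need a lookup table.
    """
    digest = hashlib.md5(object_name.encode()).hexdigest()
    shard = int(digest[:2], 16) % shards
    return f"shard-{shard:02d}/{object_name}"

print(sharded_key("thumbnails/cat.jpg"))
```

With 16 prefixes you get roughly 16 × 5,500 ≈ 88,000 req/s of headroom before thinking harder; on Azure, the equivalent move is spreading load across multiple storage accounts.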

The portability problem nobody talks about

Here's the thing that should matter more than it does in most comparisons: the S3 API is the industry standard for object storage. Not because it's technically superior, but because everyone copied it.

MinIO, Cloudflare R2, Backblaze B2, Google Cloud Storage (in interop mode), DigitalOcean Spaces, Wasabi — they all speak S3. If you build your application against the S3 API, you can swap storage providers without touching your code. I've migrated workloads from S3 to R2 in an afternoon by changing an endpoint URL and credentials. Tools like rclone make this even easier since they speak both protocols.

Azure Blob Storage has its own REST API. It's fine. It works. But it means your code, SDKs, and tooling are Azure-specific. If you ever want to leave Azure or add a second cloud, you're either rewriting your storage layer or running a compatibility gateway.

Microsoft knows this. There are third-party S3-to-Blob adapters and Azure provides some S3 API mapping through gateway configurations. But "adapter" and "gateway" are fancy words for "extra moving parts that can break." In production, I'd rather not have that layer.

If you're committed to Azure for the next five years and your team lives in the Microsoft ecosystem, this doesn't matter. If there's any chance you'll go multi-cloud, it matters a lot.

Storage tiers side by side

Both platforms use tiered pricing where less-accessed data costs less. The tiers don't map 1:1, which makes comparison annoying.

| Use case | AWS S3 tier | Azure Blob tier |
|---|---|---|
| Frequently accessed data | S3 Standard | Hot |
| Infrequently accessed (monthly) | S3 Standard-IA | Cool |
| Rarely accessed (quarterly) | S3 Glacier Instant Retrieval | Cold |
| Long-term archive | S3 Glacier Flexible / Deep Archive | Archive |
| Unpredictable access | S3 Intelligent-Tiering | No direct equivalent |
| Low-latency performance tier | S3 Express One Zone | Premium Block Blobs |

A few things worth noting:

S3 Intelligent-Tiering automatically moves objects between tiers based on access patterns. You pay a small monitoring fee per object, but you never pay retrieval charges. Azure doesn't have a true equivalent. You set lifecycle rules that move data on a schedule, but it's not adaptive to actual access.
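Opting objects into Intelligent-Tiering is typically a lifecycle rule. A hypothetical configuration (rule ID, prefix, and bucket name are mine) that you'd hand to boto3's `put_bucket_lifecycle_configuration`:

```python
# Hypothetical lifecycle rule: move everything under logs/ into
# Intelligent-Tiering 30 days after creation, then let AWS shuffle
# it between access tiers automatically.
lifecycle_config = {
    "Rules": [
        {
            "ID": "auto-tier-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
            ],
        }
    ]
}

# Applying it (requires credentials, so commented out here):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_config
# )
```

On Azure the equivalent lifecycle-management policy moves blobs on a fixed schedule (e.g. "tierToCool after N days"), which is the non-adaptive behavior described above.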

Azure's Cold tier is relatively new and sits between Cool and Archive. S3 Glacier Instant Retrieval occupies a similar spot but with faster access. If you need data available within milliseconds but rarely read it, S3 Glacier Instant is hard to beat on price.

Premium Block Blobs on Azure offer sub-millisecond latency on NVMe storage. S3 Express One Zone is AWS's answer, launched in late 2023. Both are aimed at AI/ML workloads where you need training data accessible at wire speed. Azure has been more aggressive here in 2025-2026, with scaled storage accounts designed for frontier model training (the kind of thing OpenAI and NVIDIA DGX clusters use).

Security: two different philosophies

Both meet every compliance standard you'd want (FedRAMP, HIPAA, SOC 2, you name it). Both encrypt at rest and in transit. Both support immutable storage for regulatory requirements like SEC Rule 17a-4 and FINRA. The differences are in how you manage access.

S3's model revolves around IAM policies and bucket policies. You write JSON policy documents that specify who can do what to which resources. It's flexible, it's granular, and it's the source of approximately half of all cloud security incidents. Misconfigured S3 bucket policies have leaked data from Capital One, the Pentagon, and countless others. AWS has added guardrails over the years (Block Public Access, Access Analyzer), but the underlying model still gives you enough rope to hang yourself.

Azure's model uses a combination of Entra ID (formerly Azure AD), role-based access control, and SAS tokens. RBAC is generally easier to reason about than policy documents, and managed identities mean you can authenticate services without passing keys around. SAS tokens let you generate time-limited URLs with specific permissions, which is cleaner than S3's presigned URLs in some ways. Azure also enforces TLS 1.2 as a minimum and makes it easy to disable all public access at the account level.

In practice: Azure's default security posture is slightly more locked-down out of the box. S3 gives you more fine-grained control but demands more care. If your team has strong AWS IAM experience, S3 is manageable. If your org already uses Entra ID for identity, Azure's integration is seamless.

Both platforms support WORM (Write Once Read Many) immutability. S3 calls it Object Lock. Azure uses immutability policies and legal holds. If you're in a regulated industry, both work.

A real recommendation: regardless of which you pick, use managed identities (Azure) or IAM roles (AWS) instead of access keys. Rotate any keys you do use every 90 days. Turn on access logging from day one. Azure's Well-Architected Framework guide for Blob Storage covers this in detail. The number of teams I've seen discover a misconfigured public bucket six months after the fact is not small.

Event-driven processing

If you're building pipelines that react to storage changes, both platforms have answers.

S3 fires event notifications to SNS, SQS, Lambda, or EventBridge when objects are created, deleted, or modified. It's flexible and well-documented. S3 Object Lambda lets you transform data on the fly during retrieval, which is useful for redacting PII or converting formats without storing multiple copies.
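Wiring up those notifications is one configuration document per bucket. A hypothetical example (the Lambda ARN and prefix are placeholders) of the shape you'd pass to `put_bucket_notification_configuration`:

```python
# Hypothetical S3 event notification config: invoke a Lambda function
# whenever an object lands under uploads/. The ARN is a placeholder.
notification_config = {
    "LambdaFunctionConfigurations": [
        {
            "Id": "process-new-uploads",
            "LambdaFunctionArn": (
                "arn:aws:lambda:us-east-1:123456789012:function:process-upload"
            ),
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {
                "Key": {
                    "FilterRules": [{"Name": "prefix", "Value": "uploads/"}]
                }
            },
        }
    ]
}
```

The Azure equivalent is mostly declarative too: a blob-triggered Function binds to a container path pattern in its function configuration rather than in the storage account itself.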

Azure Blob triggers Azure Functions on blob creation or update. Blob change feed gives you an append-only log of all changes, which is useful for audit trails and incremental processing. It's less flexible than S3's event routing but simpler to set up.

For data lake use cases, Azure's Data Lake Storage Gen2 (which is Blob Storage with a hierarchical namespace bolted on) gives you filesystem semantics, which tools like Spark and Databricks prefer. S3 works fine with these tools too, but the flat namespace means you're relying on key naming conventions instead of real directories.

The gotchas nobody mentions

Both platforms have operational surprises that don't show up in feature tables.

S3's eventual consistency is gone (but the myth isn't). Since December 2020, S3 is strongly consistent for all operations. You write an object, the next read returns the new data. Before 2020, this wasn't the case, and a lot of old code and documentation still account for it. If someone tells you S3 has eventual consistency, they're working off outdated information.

Azure's account-level throttling. Azure Blob limits requests per storage account, not per container or blob. Hit 20,000 requests/second on one account and everything in that account slows down. You can work around this with multiple storage accounts, but that adds management complexity. S3's per-prefix limit means one hot key path doesn't affect another.

Lifecycle rule gotchas. On both platforms, lifecycle rules that transition objects to colder tiers can trigger early deletion fees if objects are overwritten or deleted before the minimum retention period. I've seen lifecycle rules and application-level deletes fight each other, resulting in unexpected charges. Test your lifecycle rules against your actual data patterns before enabling them on production.

Azure's account naming. Storage account names in Azure must be globally unique, 3-24 characters, lowercase alphanumeric only. This sounds minor until your naming convention hits the character limit or your desired name is taken worldwide. S3 bucket names have similar global uniqueness requirements but allow hyphens and up to 63 characters.
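The naming rules above are worth validating before your provisioning pipeline hits them at deploy time. Two regexes matching the constraints as stated (a simplification: S3 also bans IP-address-shaped names and a few reserved prefixes):

```python
import re

# Azure storage accounts: 3-24 chars, lowercase letters and digits only.
AZURE_ACCOUNT = re.compile(r"^[a-z0-9]{3,24}$")

# S3 buckets: 3-63 chars, lowercase letters, digits, hyphens, periods,
# starting and ending with a letter or digit. (S3 adds further rules,
# e.g. no IP-address-like names, not captured here.)
S3_BUCKET = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

print(bool(AZURE_ACCOUNT.match("mycompanylogs")))    # True
print(bool(AZURE_ACCOUNT.match("my-company-logs")))  # False: no hyphens
print(bool(S3_BUCKET.match("my-company-logs")))      # True
```

A naming convention like `{org}{env}{service}` fits Azure's 24-character budget faster than you'd expect; check your longest combination up front.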

Multi-part upload differences. S3 requires multi-part upload for objects over 5 GB and supports it for anything 5 MB and up. Azure's block blob upload works differently: you upload blocks independently and then commit them. Functionally similar, but the SDKs handle it differently, and if you're writing low-level code against the APIs, the Azure pattern takes more thought.

Versioning and soft delete. Both support object versioning. Both have soft delete. S3's versioning keeps every version of every object until you explicitly delete it or set a lifecycle rule. This can quietly eat storage budget if you have high churn. Azure's soft delete retains deleted blobs for a configurable period. Enable both from day one. The storage cost is minimal compared to the cost of recovering from an accidental deletion.
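The "versioning quietly eats budget" failure mode has a standard fix: a lifecycle rule that expires noncurrent versions. A hypothetical rule (the ID and retention window are mine) in the same shape as S3's lifecycle API expects:

```python
# Hypothetical guardrail: keep old object versions for 30 days after
# they're superseded, then expire them so high-churn buckets don't
# accumulate unbounded version history.
versioning_guard = {
    "Rules": [
        {
            "ID": "trim-old-versions",
            "Status": "Enabled",
            "Filter": {},  # empty filter = applies bucket-wide
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        }
    ]
}
```

On Azure the analogous knob is the soft-delete retention period on the storage account, which bounds how long deleted blobs linger by construction.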

When to pick S3

  • Your infrastructure already runs on AWS.
  • You need S3 API compatibility for portability or because your tools expect it (which covers most data tools, backup solutions, and CDNs).
  • Your workload has high request rates and you don't want to manage multiple storage accounts to scale.
  • You want Intelligent-Tiering for data with unpredictable access patterns.
  • You're using Glacier for long-term archival and the price-to-retrieval-speed ratio works for you.
  • You want to keep the option of migrating to R2, Backblaze, or another S3-compatible provider someday.

When to pick Azure Blob Storage

  • Your organization runs on Azure and uses Entra ID for identity.
  • You're building on Microsoft's data stack (Azure ML, Synapse, Databricks on Azure, Power BI).
  • You need Data Lake Storage Gen2's hierarchical namespace for analytics.
  • Your hot storage costs are the majority of your bill and the 15-20% savings matter.
  • You need Premium Block Blobs for sub-millisecond latency (AI training, financial data, real-time media).
  • You're in an enterprise that already has Azure Enterprise Agreements with committed spend.

When it genuinely doesn't matter

If you're storing less than 1 TB, running fewer than a million requests per month, and not building infrastructure that needs to last more than a year, pick whichever cloud your team already knows. The differences at small scale are measured in pennies. The time spent agonizing over the choice costs more than the wrong answer.

Same goes for static website hosting, basic backup storage, or any workload where you could migrate to the other platform in a day if you needed to. Don't overthink it.

What's new in 2025-2026

Both platforms shipped meaningful updates recently.

Azure has been pushing hard on AI-scale storage. Blob scaled accounts are designed for frontier model training and inference workloads at the scale NVIDIA DGX clusters and OpenAI operate at. ACStor brings native blob storage to Kubernetes. Elastic SAN with ZRS support targets SAP and high-availability workloads. Azure's February 2026 updates include GA disk storage improvements and deeper integrations with Azure AI Foundry and Ray.

S3 has continued improving Express One Zone for low-latency AI workloads and rolled out enhancements to Intelligent-Tiering monitoring. S3 Object Lambda keeps getting new use cases. The core product hasn't changed dramatically because it was already mature; AWS has focused more on price reductions and ecosystem integrations.

If AI/ML training data is a primary use case, Azure has been more visibly investing here. If you're running something like a self-hosted AI agent that needs fast access to model weights or vector data, the premium tiers on either platform are worth benchmarking against your specific workload. If you need stable, boring, well-understood object storage that works with everything, S3 hasn't given you a reason to switch.

The decision

Most teams overthink this. The answer is usually obvious once you ask three questions:

  1. Which cloud does your organization already use? If you're an AWS shop, use S3. If you're an Azure shop, use Blob Storage. The integration benefits and avoiding cross-cloud egress fees outweigh any pricing difference.

  2. Do you need multi-cloud flexibility? If yes, or if you're not sure, lean toward S3. The API compatibility gives you options. Building on Azure Blob's API is a one-way door.

  3. What's your workload? Hot storage with predictable access? Azure saves you money. Cold archival? S3 Glacier is hard to beat. AI training data? Evaluate both premium tiers against your specific throughput needs.

The honest answer isn't that one is better. The honest answer is that switching costs are high, ecosystem lock-in is real, and you should pick the one that fits where you already are rather than where a comparison table says you should be.
