How to Alias a Domain Name or Sub Domain to Amazon S3

A few months ago, I noticed I was approaching the bandwidth limits on my hosting account. Switching hosting providers is a pain, so I decided to move some high-bandwidth graphics to Amazon S3, where the bandwidth is cheap and there is no fixed cap. All was well until I realized that Google was returning search results pointing to my bucket on s3.amazonaws.com instead of carltonbale.com. Luckily, Amazon AWS has a work-around: you can use your own domain name with an Amazon S3 bucket. Here are the instructions on how to do it, from beginning to end.

Introductory Steps for new Amazon S3 Users:

  1. First of all, you obviously need your own domain name and your own Amazon S3 account.
  2. Secondly, you need a way to create and manage Amazon S3 buckets, so you’ll need to install a client on your PC.
    • I recommend Bucket Explorer, a full-featured and easy-to-use client that runs on Windows and Linux (a Mac version is in private beta and should be available in October 2007). A free, less-featured alternative is the S3 Organizer add-on for the Mozilla Firefox web browser.
    • Install your application of choice and either:
      • Open Bucket Explorer -or-
      • Open Firefox and go to Tools menu -> S3 organizer, and click the Manage Accounts button
    • Enter your AmazonAWS Access Key and Secret Key
      • These are available by going to http://aws.amazon.com, mousing over “Your Web Services Account” in the upper right-hand corner, and selecting “AWS Access Identifiers”

How to Alias your Subdomain to an Amazon S3 Bucket:

  1. Identify the exact domain name you want to forward to Amazon S3. S3 is not a general-purpose web server, so I would not recommend forwarding your entire domain there; use a sub-domain instead. The sub-domain I’m going to use is the actual one I set up: s3.carltonbale.com
  2. Create a new “bucket” (a.k.a. folder) by clicking the “create folder/bucket” icon. Name the bucket exactly the same as your sub-domain.
    • Example bucket name: s3.carltonbale.com
    • Note: you must use a unique bucket name; you won’t be able to create a bucket if the name is already being used by someone else.
  3. Now comes the tricky part: modifying your DNS server settings. The procedure varies by host and software system, but here are the general steps:
    • Log on to your web host control panel and select “Manage DNS Server Settings” or similar
    • Create a new CNAME entry for your domain. For my example of s3.carltonbale.com, the entry was:
      • Name: s3
      • Type: CNAME
      • Value: s3.amazonaws.com.
      • (If you are a European user, use s3-external-3.amazonaws.com. instead)
    • And yes, the dot at the end of “s3.amazonaws.com.” is correct, at least for me. Look at your other entries to figure out what you should enter.
  4. Now comes the hardest part: waiting. It took about 2 hours for my subdomain to be recognized by AmazonAWS.
  5. Open the subdomain name in your browser. You should now be able to access your files through any of three URLs:
    1. subdomain.domain.com (as long as the bucket name is the same as the full subdomain name, it is not necessary to specify the bucket name again at the end of the URL)
    2. your_bucket_name.s3.amazonaws.com
    3. s3.amazonaws.com/your_bucket_name
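To make those three forms concrete, here is a small sketch in Python (a hypothetical helper I am using purely for illustration, not an AWS tool; the bucket and file names are from my example above):

```python
# Build the three equivalent URLs for an object in an S3 bucket whose
# name matches the full subdomain (e.g. bucket "s3.carltonbale.com").
def s3_urls(bucket, key):
    return [
        f"http://{bucket}/{key}",                   # via your CNAME alias
        f"http://{bucket}.s3.amazonaws.com/{key}",  # virtual-hosted style
        f"http://s3.amazonaws.com/{bucket}/{key}",  # path style
    ]

for url in s3_urls("s3.carltonbale.com", "logo.png"):
    print(url)
```

The first form only works once the CNAME from step 3 has propagated; the other two work immediately.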

Final Steps

  1. You’ll need to set permissions on your bucket and the files within it using your favorite bucket management tool. I recommend setting the bucket permission to “full control by owner” only and setting the permissions of the files within the bucket to “full control by owner, read access for everyone”. This will prevent people from being able to browse/list the files in your bucket.
  2. If you don’t want Google (or Google Images) to index the files in your subdomain, create a file named robots.txt containing the following and copy it into your bucket:

User-agent: *
Disallow: /
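If you generate the file rather than type it, the equivalent in Python is tiny (a sketch; the function name is mine, and the upload itself still happens through whatever S3 client you use):

```python
# Produce a robots.txt that blocks all crawlers from the given paths.
# The default "Disallow: /" blocks everything, matching the file above.
def make_robots(disallow_paths=("/",)):
    lines = ["User-agent: *"]
    lines += [f"Disallow: {path}" for path in disallow_paths]
    return "\n".join(lines) + "\n"

with open("robots.txt", "w") as f:
    f.write(make_robots())
```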

That’s it, my start-to-finish guide on how to use your own domain name with Amazon S3. If I missed something or if something isn’t clear, let me know in the comments and I’ll fix it.

Filed under: Web | Last updated: 2014-May-27

71 Comments

  • Trevor says:

    Great tutorial. Thanks! :)

    The Amazon docs didn’t go into detail so I was left wondering how to do it. This helped out a lot.

    One thing: “s3.amaonaws.com.”, it should be “s3.amazonaws.com.” It almost tripped me up when I copied and pasted.

  • Carlton Bale says:

    Glad you found it useful and thanks for catching the typo — it’s fixed now!

  • Matthias says:

    Step 5.2 is incorrect. It’s your_bucket_name.s3.amazonaws.com. Insert “s3.”.

    Thanks
    Matthias

  • Carlton Bale says:

    Matthias: Thanks for the correction; the post has been updated.

  • Yuri says:

    Thank you very much for posting this article, it’s a VERY important thing, and still after reading AWS articles & docs I had no clue whether it’s possible or not.

    I’m really excited about the opportunities offered by S3, but it was quite upsetting to think that one couldn’t use one’s own DNS name there.

    It’s one thing if you start a project that uses Amazon’s brand (by mentioning AWS in its description and URLs of the files) to gain some credibility – but for many serious projects that have their own brands etc it’s better not to confuse users with links to Amazon.

    And the Google problem that you mentioned is a very important issue as well.

    So, again, thank you very much for this article!

    • gurpreet singh gill says:

      Actually, my problem is this: I want to create a subdomain on Amazon EC2 so I can host my 3 projects. I’ve searched a lot with no success. Could you please tell me the steps?

      My domain name is:
      http://ec2-122-248-220-105.ap-southeast-1.compute.amazonaws.com

      How do I create a subdomain so that I can host my multiple projects?
      Thanks in advance.

      • Ashok Srinath says:

        @gurpreet singh gill, AWS doesn’t provide general DNS services. You will need access to a DNS server in order to do this, so that you can add more A or CNAME records. The easiest way is to buy your own domain name, and then add all these records through the service provider.

  • randulo says:

    Thanks a million for writing this, it was exactly what I needed!

  • Mierco says:

    Thanks for publishing your tutorial. Worked just perfectly for the last few weeks.
    Actually it still works, but not for European buckets anymore.

    If you access an EU bucket over the subdomain http://server.domain.com/ you’ll instantly be forwarded to:
    http://server.domain.com.s3-external-3.amazonaws.com/
    I noticed it today, so I guess Amazon changed some settings no more than a week ago.

    I hope this is interesting for you, and I’m looking forward to a reply :-)

  • Ola says:

    I have the same problem with a European bucket. Otherwise, I guess it was a great guide ;)

  • Meister says:

    With a European bucket, you should create the CNAME to “s3-external-3.amazonaws.com”

  • Carlton Bale says:

    Meister: Thanks for the tip.

  • Sander says:

    Thanks for the clear description on how to get this to work. It works great!

  • Barkeeper says:

    I tried for a while before I found out from your tutorial that the bucket name must match the subdomain name. Thanks a lot for the description!

  • Virtual1 says:

    Thanks for the great description of exactly how to handle the CNAME. The Amazon doc is not too clear :-)

    I have this somewhat setup now with a couple of mp3 files and it goes right to the file and plays in Quicktime through FF. However, in IE, I get a Certificate Error.

  • Ross McMillan says:

    AWS documentation mentions that https sites will give a certificate mismatch error.

  • Jon says:

    Hey thanks for this, it’s exactly what I was looking for and worked perfectly.

  • Khal says:

    Hi

    I tried doing this. Took a few attempts .. but got it working.. both this post and this post

    http://www.wrichards.com/blog/2009/02/customize-your-amazon-s3-url/comment-page-1/#comment-135

    were never clear on things.. let me give it a shot for people who get stuck..

    You want your url like

    http://superman.site.com/

    Or

    http://coolservername.site.com/

    agreed?

    Good..

    1) Create your subdomain on your domain Cpanel folder.

    For this example, I own test.com and the subdomain I created is batman.test.com

    2) Open Cloud Explorer (free amazon s3 kit) and create your bucket.. this is the important part..

    your bucket name must be your entire subdomain name

    So create
    “batman.test.com” as a bucket and not “batman”

    NOW you can follow the rest of these tutorials

    3) You should create a new CNAME entry for your domain. For my example of batman.test.com, the entry was:

    * Name: batman
    * Type: CNAME
    * Value: s3.amazonaws.com.
    * (If you are a European user, use s3-external-3.amazonaws.com. instead)

    That’s it!

    http://batman.test.com/loads/bucket/subfolders/logo.png

    Enjoy!!

  • [...] Update: If you would like to use your own domain for accessing your files stored on S3 (e.g. cdn.mydomain.com instead of my-assets.s3.amazonaws.com), then you’ll find this article useful: How to Alias a Domain Name or Sub Domain to Amazon S3 [...]

  • Great article, just what I was looking for. Thanks to Chris @ stillbreathing.co.uk for pointing it out.

    One question: can you FTP in and interact with a bucket and its content? I need to be able to upload and delete content on the subdomain/bucket via scripts executed from my main server.

    Matt

    • Carlton Bale says:

      S3 doesn’t support FTP. You’ll need a program that implements the S3 API and supports a command line interface, such as S3 Sync for Ruby.

      • Louis says:

        Bucket Explorer keeps crashing on me. I use a Mac (Lion) and found that Transmit works best for me.
        It acts like an FTP client (well, it is actually an FTP client); the only thing I have to create through the AWS web interface is the buckets.

  • ROW says:

    Hi Carlton,

    That was an amazing explanation. Thanks a lot.

    A point regarding “final steps”:

    1) I recommend setting the bucket permission to “full control by owner” only and setting the permissions of the files within the bucket to “full control by owner, read access for everyone”. This will prevent people from being able to browse/list the files in your bucket.

    2) If you don’t want Google (or Google Images) to index the files in your subdomain, create a file named robots.txt containing the following and copy it into your bucket:

    Given the settings in #1, I think you can leave out #2. The reason being: if you set “full control by owner” ONLY on the bucket, then Googlebot will NOT be able to read the robots.txt placed in the bucket. Hence, whether you have the robots.txt or not, it won’t make any difference.

  • ROW says:

    Well, I take back what I said above, because I am able to access the robots.txt even if my bucket is not set to read-for-all. This is beyond my understanding. Maybe you can shed some light on it, Carlton?

    Given this, I am just setting my robots.txt to be readable by everyone, while other files will only use time-limited expiring URLs.
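The expiring URLs ROW mentions were, in this era, built with S3’s query-string authentication: you HMAC-SHA1 sign a short string containing the expiry time with your secret key. A minimal sketch using only the Python standard library (the keys below are placeholders, and modern AWS SDKs have since moved to Signature Version 4, so treat this as an illustration rather than a drop-in tool):

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def expiring_url(bucket, key, access_key, secret_key, expires_in=3600):
    """Build a time-limited S3 URL using legacy query-string auth."""
    expires = int(time.time()) + expires_in
    # String-to-sign for a query-string-authenticated GET:
    # VERB \n Content-MD5 \n Content-Type \n Expires \n /bucket/key
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return (f"http://{bucket}.s3.amazonaws.com/{key}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}"
            f"&Signature={signature}")

print(expiring_url("s3.carltonbale.com", "private.pdf",
                   "AKIDEXAMPLE", "not-a-real-secret"))
```

Anyone without the signed URL (including Googlebot) gets an Access Denied response once the object itself is not public-read.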

  • komin says:

    Just what I needed, thanks!

  • Cory says:

    You’re a lifesaver, dude!

  • Mike Kelly says:

    Thanks for this – this is close to what I want but I see that you’re just using this for some of your website content. What if I wanted to use S3 to host an Internet-facing SFTP site where clients could copy data using any SFTP client? Anything different or do I just set up an sftp CNAME where you set up an S3 CNAME record? I worry about the security issue mentioned above when redirecting SSL traffic to S3 – I wonder if that will apply to SFTP (which I understand only a bit). Thanks for any thoughts.

    • Carlton Bale says:

      S3 does not support SFTP. It only supports the Amazon protocols, which are documented and available. I think there is HTTP upload code that can be embedded in a web page and that is probably the closest to what you want. Unfortunately, I don’t have any more details than that.

  • Tobias Rohrle says:

    Thanks Carlton for this tutorial.

    Here is my feedback:
    We used this setup to host our newsletter images.
    After some weeks we rolled back to the “native” S3 domain (something like “mybucket.com.s3.amazonaws.com”) because the CNAME trick requires an additional DNS request (a performance concern).

    Today we discovered that having “.com” twice in a domain name is treated as a phishing technique by some anti-spam filters:
    “mybucket.com.s3.amazonaws.com”

    That’s why we are now copying the existing bucket to a new one named without “.com”.

    Hope this helps someone.

  • Drew says:

    Thanks for writing this. I have some familiarity with web servers and DNS. I created a full-featured content distribution service using S3 and was using the bucket_name.s3.amazonaws.com convention. This is a much better solution, because I don’t want my customers to know how I’m delivering content to them.

    Very well written, and very good resource for all levels of programming knowledge.

    As for those who want to use https with their content, there are only two ways I can think of offhand. The first obvious choice would be to use Amazon’s EC2 service to run your web server, or a wild-card SSL cert might do the trick, since those shouldn’t complain about subdomains. What I do is have a script that fetches the content via cURL before it is embedded in the visitor’s browser; because my server re-serves the content under its own domain and certificate, the browser thinks everything is coming from my server, so there are no certificate errors. It takes a couple of extra seconds for the stream to buffer, but it’s worth it, as the content is still being pulled directly from S3. This works really well with SWF files, large graphics, MP3s, or any other media one might stream.

  • Drew says:

    Just a follow-up on my previous comment: for those who are looking for FTP to S3, this is entirely possible. As a matter of fact, I am considering creating an online FTP portal for S3 that would look like a traditional FTP program, but my server would handle all of the S3 bucket requests. I would of course charge a small fee for this service if enough people were interested.

  • Khal says:

    Why would I pay you, @drew, when CloudBerry (http://cloudberrylab.com/) does a similar thing for free?

    • Carlton Bale says:

      I believe Drew is talking about something that runs on the webserver that acts as a FTP server, but actually saves the files to Amazon S3. As I understand it, CloudBerry is client-side (Windows only?) software.

      • Khal says:

        Yeah, my bad: CloudBerry is client-side on Windows. I have actually moved on since my original posts in this thread. I am now on Rackspace Cloud Files. It uses Akamai and, although it does not yet have PoP, it’s brilliant. Highly recommend.

  • Drew says:

    Carlton is correct. A cross-platform web app would make the most sense. I would imagine that if there was enough interest in such an app, it would make sense to create it; it would also make sense that the creator would be entitled to be reimbursed for the EC2 and S3 costs associated with running the app.

  • [...] http://carltonbale.com/how-to-alias-a-domain-name-or-sub-domain-to-amazon-s3 August 21, 2010 4:05 am Jesper Mortensen AFAIK there is nothing special to it. In your public DNS just create “your-name.your-domain.com” as a CNAME to “your-bucket-name.s3.amazonaws.com”. [...]

  • Tim D says:

    Hi,

    I’m new to AWS and web programming. Your article is very useful and I got most of the points. There are only a few things I would appreciate someone explaining to me.

    I was wondering: what would make people choose AWS if the entire domain should not be forwarded? Most web hosts nowadays provide an enormous amount of storage along with domain-management tools. Wouldn’t it be more cost-efficient and more convenient if I keep the data on the web server I’m hosting with? Also, if I forward my domain/sub-domain to S3, would my PHP code and all the .htaccess or php.ini rules still work?

    Sorry if my questions sound stupid. I’m still an apprentice in this e-commerce world :D

    Thanks!

    • Can’t answer your point regarding .htaccess and php.ini, as I’m not sure whether AWS allows access to those.

      In regard to the pros of AWS, backup and redundancy are among the primary plus points. Most hosting providers will not offer a backup service as part of a hosting package, certainly not with VPS solutions, and those that do will charge extra or simply offer RAID. And even then, it doesn’t guard against server outages.

      Traffic is another benefit. If your app or web site has a massive traffic spike (known as the Digg effect), AWS automatically adjusts the resources allocated to you in order to cope with the spike, whereas a standard hosting provider will not, or in a worst-case scenario the VPS or server will just fall over.

      If, like Twitter, you run a high-traffic app, images and videos are best stored on an AWS instance. This way you only pay for the resources you use, when you use them. If you had your own server or hosting solution, you would have to estimate the amount of resources you’ll need and provision them upfront to handle any increase in demand: high bandwidth allocation, powerful processors, a large amount of memory.

      To give you an idea of server usage, I have a dedicated server co-located in a nearby server house. This server is used for low traffic sites (70k+ page views a year), but mainly for developing and testing web projects and apps in a real-world situation. Currently the server handles around 19GB of data transfer a month, and I have a max of 300GB of bandwidth per month allocated to that server.

      Estimated total cost including software leasing, and co-location costs: £1500 per year plus the cost of the server.

      It all really depends on what your site does. If it is a basic site with low-to-medium usage, then a standard hosting solution or VPS is fine. If you’re building a social network, or a medium-to-high-demand video, audio, or image service, then you might want to consider a cloud-based hosting solution.

  • Tim Brandes says:

    Thank you Carlton, great article which helped me a lot!

  • Rajesh says:

    Thanks Man… great… hats off

  • bangfruit says:

    You made this easy to understand. Keep Bangin’

  • [...] This step has a lot of different options because every domain name registrar or control panel system is different. Basically, you need to set a CNAME record for your domain to point to your AWS URL (without the http://). Here’s a shot of how you would do this at SustainableDomains.com . You can also forward the A record (example.com, without the www) to your CNAME record (www.example.com) at most registrars. There is a more detailed article here. [...]

  • Edward says:

    About a year ago, you left a comment on the following page:

    http://www.marketingtechblog.com/technology/wordpress-amazon-s3/

    You said…

    “I should add, you will need to point your CNAME to the new

    your_unique_cloudfront_distribution_name.cloudfront.net

    instead of to

    your_unique_subdomain.s3.amazonaws.com

    But after that, you treat it just like a normal S3 bucket.”

    My question is this:

    I have set up my Cloudfront CNAME in my DNS settings just as you suggested on that page.

    Now, how can I use the Cloudfront Service and still use a subdomain on my s3 account?

    For example,

    While I am using the s3 Service I want it to say something like

    my_subdomain.my_domain.com.s3.amazonaws.com/myfile.txt

    Then I can remove the s3.amazonaws.com part and just use the

    my_subdomain.my_domain.com/myfile.txt and I can access my file.

    How do I do the same for the Cloudfront Service? (I don’t necessarily want people to be able to see that I am using s3.)

    For example,

    While I am using the Cloudfront Service I want it to say something like

    your_unique_cloudfront_distribution_name.cloudfront.net/myfile.txt

    Is there a way to remove the

    your_unique_cloudfront_distribution_name.cloudfront.net

    part and just use the

    my_subdomain.my_domain.com/myfile.txt

    like I did with the s3 Service? (So that it looks like I am not using s3?)

    Last, on the post located on this page, you said… “it took about 2 hours for my subdomain to be recognized by AmazonAWS.”

    How exactly can you tell when your new subdomain is up and running when you are using the CloudFront service? Are the three S3 URLs you mentioned in this post different for the CloudFront service, or are CloudFront URLs the same as the S3 URLs?

    Thanks for your help.

    Edward

    • Carlton Bale says:

      Edward, as I recall, you do everything the same, but you designate the bucket as a CloudFront bucket instead of the default bucket type. You then get the CloudFront features and pay a slightly higher price. I wish I could be more specific about how to designate a CloudFront bucket, but it’s been a while since I’ve done so.

    • Carlton Bale says:

      I should add that you do have to create a new S3 CloudFront bucket, and then a CNAME that points to it. For example, s3.carltonbale.com points to CNAME d18******.cloudfront.net.

  • Scott Judson says:

    Can you please clarify for me: do I set my CNAME to point to my bucket or just to s3.amazonaws.com?

    When I ask the WP Amazon plugin to verify that it is set up, it says no and to add: my.bucket.com CNAME my.bucket.com.s3.amazonaws.com

    Love to get this working.

    Cheers.

    • Carlton Bale says:

      Here’s the deal. You set up your bucket as normal and store your files in it (my bucket name is s3.carltonbale.com). To create a CloudFront distribution from that bucket, you go to your AWS control panel, create a new CloudFront distribution, and set the origin bucket (in my case, s3.carltonbale.com.s3.amazonaws.com). At this point, it will give you a new domain name for the distribution, something like DA300348sdfs.cloudfront.net. You then configure that distribution in the CloudFront control panel and assign a CNAME, which is s3.carltonbale.com for me (so now both the S3 bucket and the CloudFront distribution are set to respond to that CNAME).

      Now you can use either the S3 version or the CloudFront version of the shared files. So if I go to my DNS settings and point the s3.carltonbale.com CNAME to s3.amazonaws.com, I get the files via S3. But if I set the CNAME to DA300348sdfs.cloudfront.net, I get the CloudFront version.
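The only thing that changes between the two setups Carlton describes is the CNAME target. As a sketch (the record name is from his example; the distribution domain below is a placeholder, not a real value):

```python
# The DNS record you publish for the subdomain, depending on whether
# plain S3 or a CloudFront distribution should answer requests.
def cname_record(label, use_cloudfront, distribution_domain=None):
    target = distribution_domain if use_cloudfront else "s3.amazonaws.com."
    return (label, "CNAME", target)

print(cname_record("s3", False))                             # serve via S3
print(cname_record("s3", True, "dEXAMPLE.cloudfront.net."))  # serve via CloudFront
```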

  • elanthalir says:

    Thanks for the write-up. The steps you have given work perfectly if I use a sub-domain, i.e. aws.mydomain.com pointing to aws.mydomain.com.s3.amazonaws.com.

    But I want to use AWS as my main hosting, with mydomain.com pointing to mydomain.com.s3.amazonaws.com.

    Right now I’m using domain forwarding for this, but as you know, this reveals my AWS URL.

    When I try to configure a CNAME for mydomain.com, my DNS registrar’s control panel always gives an error that a CNAME already exists.

    Is there any other way, like an A record, that I can use to make AWS serve my root domain?

    Thanks in advance

    • Carlton Bale says:

      I don’t know of a way to do this, and I probably wouldn’t recommend it. As I recall, there is no way to make AWS display a default page or file, so if someone goes to yourdomain.com, they will receive an XML file listing instead of the default index.html file a dedicated web server would show.

  • elanthalir says:

    @carlton thanks for your reply.

    It’s possible now: http://aws.typepad.com/aws/2011/02/host-your-static-website-on-amazon-s3.html

    I have done it:
    1. Create a bucket named www.mydomain.com
    2. Create a CNAME with host www pointing to www.mydomain.com.s3.amazonaws.com
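One note on the announcement elanthalir links: static-website hosting serves from a separate, region-specific website endpoint rather than plain s3.amazonaws.com, and that endpoint is what the www CNAME should target if you want index documents to work. A sketch of the classic endpoint format (the default region name here is an assumption; check the AWS documentation for your region):

```python
# Classic S3 static-website endpoint for a bucket in a given region.
def website_endpoint(bucket, region="us-east-1"):
    return f"{bucket}.s3-website-{region}.amazonaws.com"

print(website_endpoint("www.mydomain.com"))
```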

  • [...] subdomain" in google and found many different instruction sets on how to do it. (Here's one: How to Use Your Own Domain Name with Amazon S3 | CarltonBale.com) Once done you then have a subdomain which points directly at your s3 bucket and you can reference [...]




CarltonBale.com is powered by WordPress | View Mobile Site | © 1996-2014 Carlton Bale