85 Comments

  1. Great tutorial. Thanks! 🙂

    The Amazon docs didn’t go into detail so I was left wondering how to do it. This helped out a lot.

    One thing: you wrote “s3.amaonaws.com.”; it should be “s3.amazonaws.com.” It almost tripped me up when I copied and pasted.

  2. Step 5.2 is incorrect. It’s your_bucket_name.s3.amazonaws.com. Insert “s3.”.

    Thanks
    Matthias

  3. Thank you very much for posting this article. It’s a VERY important thing, and even after reading AWS articles & docs I had no clue whether it was possible or not.

    I’m really excited about the opportunities offered by S3, but it was quite upsetting to think that one couldn’t use one’s own DNS there.

    It’s one thing if you start a project that uses Amazon’s brand (by mentioning AWS in its description and in the URLs of its files) to gain some credibility – but for many serious projects that have their own brands, it’s better not to confuse users with links to Amazon.

    And the Google problem that you mentioned is a very important issue as well.

    So, again, thank you very much for this article!

    1. Actually, my problem is this: I want to create a subdomain on Amazon EC2. I want to host my 3 projects, and I searched a lot but with no success. Could you please tell me the steps for how to do it?

      And here my domain name is:
      http://ec2-122-248-220-105.ap-southeast-1.compute.amazonaws.com

      Now I want to create a subdomain so that I can host my multiple projects.
      Thanks in advance.

      1. @gurpreet singh gill, AWS doesn’t provide general DNS services. You will need access to a DNS server in order to do this, so that you can add more A or CNAME records. The easiest way is to buy your own domain name, and then add all these records through the service provider.

  4. Thanks a million for writing this, it was exactly what I needed!

  5. Thanks for publishing your tutorial. It worked perfectly for the last few weeks.
    Actually it still works, but not for European buckets anymore.

    If you access an EU bucket over the subdomain http://server.domain.com/ you’ll be instantly forwarded to:
    http://server.domain.com.s3-external-3.amazonaws.com/
    I noticed it today, so I guess Amazon changed some settings no more than a week ago.

    I hope this is interesting for you, and I’m looking forward to a reply 🙂

  6. I have the same problem with a European bucket. Otherwise I guess it was a great guide 😉

  7. With a European bucket, you should create the CNAME to “s3-external-3.amazonaws.com”.

  8. Thanks for the clear description on how to get this to work. It works great!

  9. I’d tried for a while before I found out from your tutorial that the bucket name must match the subdomain name – thanks a lot for the description!

  10. Thanks for the great description of exactly how to handle the CNAME. The Amazon doc is not too clear 🙂

    I have this somewhat set up now with a couple of MP3 files, and it goes right to the file and plays in QuickTime through FF. However, in IE, I get a Certificate Error.

  11. AWS documentation mentions that https sites will give a certificate mismatch error.

  12. Hey thanks for this, it’s exactly what I was looking for and worked perfectly.

  13. Hi

    I tried doing this. It took a few attempts, but I got it working. Neither this post nor this one

    http://www.wrichards.com/blog/2009/02/customize-your-amazon-s3-url/comment-page-1/#comment-135

    was entirely clear on things, so let me give it a shot for people who get stuck.

    You want your url like

    http://superman.site.com/

    Or

    http://coolservername.site.com/

    agreed?

    Good..

    1) Create your subdomain in your domain’s cPanel.

    For this example, I own test.com and the subdomain I created is batman.test.com

    2) Open Cloud Explorer (a free Amazon S3 kit) and create your bucket. This is the important part:

    your bucket name must be your entire subdomain name

    So create
    “batman.test.com” as a bucket and not “batman”

    NOW you can follow the rest of these tutorials

    3) Add a new CNAME entry for your domain. For my example of batman.test.com, the entry was:

    * Name: batman
    * Type: CNAME
    * Value: s3.amazonaws.com.
    * (If you are a European user, use s3-external-3.amazonaws.com. instead)
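
    In zone-file terms (a BIND-style fragment for the hypothetical test.com zone used in this example), that entry looks like:

```
; CNAME for the batman.test.com bucket
batman  IN  CNAME  s3.amazonaws.com.
; European buckets (at the time) instead used:
; batman  IN  CNAME  s3-external-3.amazonaws.com.
```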

    That’s it!

    http://batman.test.com/loads/bucket/subfolders/logo.png

    Enjoy!!


  14. Great article, just what I was looking for. Thanks to Chris @ stillbreathing.co.uk for pointing it out.

    One question: can you FTP to and interact with a bucket and its content? I need to be able to upload and delete content on the subdomain/bucket via scripts executed from my main server.

    Matt

      1. Bucket Explorer keeps crashing on me. I use a Mac (Lion) and found that Transmit works best for me.
        It acts like an FTP client (well, it actually is an FTP client); it’s just that I have to create the buckets through the AWS web interface.

  15. Hi Carlton,

    That was an amazing explanation. Thanks a lot.

    A point regarding “final steps”:

    1) I recommend setting the bucket permission to “full control by owner” only and setting the permissions of the files within the bucket to “full control by owner, read access for everyone”. This will prevent people from being able to browse/list the files in your bucket.

    2) If you don’t want Google (or Google Images) to index the files in your subdomain, create a file named robots.txt containing the following and copy it into your bucket:

    Given the settings you did in #1, I think you can leave out #2. The reason being: if you set “full control by owner” ONLY for the bucket, then googlebot will NOT be able to read the robots.txt placed in the bucket. Hence, whether you have the robots.txt or not, it won’t make any difference.
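
    For reference, the usual disallow-all robots.txt (assuming the article’s version was the standard form) is just:

```
User-agent: *
Disallow: /
```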

  16. Well, I take back what I said above, because I am able to access the robots.txt even if my bucket is not set to read for all. This is beyond my understanding. Maybe you can throw some light on this, Carlton?

    Now, given this, I am just setting my robots.txt to be readable by everyone, while other files will only use time-limited expiring URLs.
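
    The time-limited expiring URLs mentioned here are built with S3’s query-string authentication (Signature Version 2, the scheme S3 supported at the time). A minimal sketch in Python; the bucket name, key, and credentials are placeholders, not real values:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote, urlencode

def s3_expiring_url(bucket, key, access_key, secret_key, expires_in=3600):
    """Build a time-limited S3 GET URL using query-string authentication
    (AWS Signature Version 2)."""
    expires = int(time.time()) + expires_in
    # StringToSign for a plain GET: method, Content-MD5, Content-Type,
    # Expires, and the canonicalized resource, newline-separated.
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    query = urlencode({
        "AWSAccessKeyId": access_key,
        "Expires": expires,
        "Signature": signature,
    })
    return f"https://{bucket}.s3.amazonaws.com/{quote(key)}?{query}"

# Placeholder credentials; a real call would use your own AWS keys.
url = s3_expiring_url("batman.test.com", "logo.png", "AKIAEXAMPLE", "secretkey")
```

    The returned link works until the Expires timestamp passes, after which S3 responds with AccessDenied, so the file can stay private in the bucket.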

  17. Thanks for this – this is close to what I want but I see that you’re just using this for some of your website content. What if I wanted to use S3 to host an Internet-facing SFTP site where clients could copy data using any SFTP client? Anything different or do I just set up an sftp CNAME where you set up an S3 CNAME record? I worry about the security issue mentioned above when redirecting SSL traffic to S3 – I wonder if that will apply to SFTP (which I understand only a bit). Thanks for any thoughts.

    1. Author

      S3 does not support SFTP. It only supports the Amazon protocols, which are documented and available. I think there is HTTP upload code that can be embedded in a web page and that is probably the closest to what you want. Unfortunately, I don’t have any more details than that.

  18. Thanks Carlton for this tutorial.

    Here is my feedback :
    We used this system to host our newsletter images.
    After some weeks we rolled back to the “native” S3 domain (something like “mybucket.com.s3.amazonaws.com”) because the “CNAME trick” requires an additional DNS request (performance concerns).

    Today we discovered that having “.com” twice in the domain name is treated as a phishing technique by some anti-spam processes:
    “mybucket.com.s3.amazonaws.com”

    That’s why we are finally copying the existing bucket to a new one named without “.com”.

    Hope this helps someone.

  19. Thanks for writing this. I have some familiarity with web servers and DNS. I have created a full-featured content distribution service using S3 and was using the bucket_name.s3.amazonaws.com convention. This is a much better solution, because I don’t want my customers to know how I’m delivering content to them.

    Very well written, and very good resource for all levels of programming knowledge.

    As for those who want to use https with their content, there are only two ways I can think of to do this offhand. The first obvious choice would be to use Amazon’s EC2 service to run your web server, or a wildcard SSL cert may do the trick, because browsers shouldn’t complain about subdomains. What I do is have a script that fetches the content via cURL before it is embedded in the visitor’s browser; because cURL allows you to seamlessly modify headers, the browser thinks everything is coming from your server, thus no certificate errors. It takes a couple of extra seconds for the stream to buffer, but it’s worth it, as the content is still being pulled directly from S3. This works really well with SWF files, large graphics, MP3s, or any other media one might stream.

  20. Just a follow-up on my previous comment: for those who are looking for FTP to S3, this is entirely possible. As a matter of fact, I am considering creating an online FTP portal for S3 that would look like a traditional FTP program, but my server would handle all of the S3 bucket requests. I would of course charge a small fee for this service if enough people were interested.

    1. Author

      I believe Drew is talking about something that runs on the webserver that acts as a FTP server, but actually saves the files to Amazon S3. As I understand it, CloudBerry is client-side (Windows only?) software.

      1. Yeah, my bad, CloudBerry is client-side on Windows. I have actually moved on since my original posts on this thread. I am now on Rackspace Cloud Files. It uses Akamai and, although it does not yet have PoP, it’s brilliant. Highly recommend.

  21. Carlton is correct. A cross-platform web app would make the most sense. I would imagine that if there were enough interest in such an app, it would make sense to create it; it would also make sense that the creator be entitled to reimbursement for the EC2 and S3 costs associated with running the app.


  22. Hi,

    I’m new to AWS and web programming. Your article is very useful and I got most of the points. There only are few things that I would appreciate if someone could explain to me.

    I was wondering: what would make people choose AWS if the entire domain should not be forwarded? Most web hosts nowadays provide an enormous amount of storage along with domain-management tools. Wouldn’t it be more cost-efficient and more convenient if I keep the data on the web server I’m hosting with? Also, if I forward my domain/sub-domain to S3, would my PHP code and all the .htaccess or php.ini rules still work?

    Sorry if my questions sound stupid. I’m still an apprentice in this e-commerce world 😀

    Thanks!

    1. I can’t answer your point regarding .htaccess and php.ini, as I’m not sure if AWS allows access to those.

      In regard to the pros of AWS, backup and redundancy are among the primary plus points. Most hosting providers will not offer a backup service as part of a hosting package, certainly not with VPS solutions, and those that do will charge extra, or simply offer RAID. And even then, it doesn’t guard against server outages.

      Traffic is another benefit. If your app or web site has a massive traffic spike (known as the Digg effect), AWS automatically adjusts the resources allocated to you in order to cope with the spike, whereas a standard hosting provider will not, or in a worst-case scenario the VPS or server will just fall over.

      If, like Twitter, you run a high-traffic app, images and videos are best stored on an AWS instance. This way you only need to worry about the resources you use, when you use them – unlike if you had your own server or hosting solution, in which case you would have to estimate the amount of resources you’ll need and provide them upfront in order to handle any increase in demand, such as high bandwidth allocation, powerful processors, or a large amount of memory.

      To give you an idea of server usage, I have a dedicated server co-located in a nearby server house. This server is used for low traffic sites (70k+ page views a year), but mainly for developing and testing web projects and apps in a real-world situation. Currently the server handles around 19GB of data transfer a month, and I have a max of 300GB of bandwidth per month allocated to that server.

      Estimated total cost including software leasing, and co-location costs: £1500 per year plus the cost of the server.

      It all really depends on what your site does. If it is a basic site with low to medium usage, then a standard hosting solution or VPS is fine. If you’re building a social network, or a medium- to high-demand video, audio, or image service, then you might want to consider a cloud-based hosting solution.

  23. Thank you Carlton, great article which helped me a lot!

  24. You made this easy to understand. Keep Bangin’


  25. About a year ago, you left a comment on the following page:

    http://www.marketingtechblog.com/technology/wordpress-amazon-s3/

    You said…

    “I should add, you will need to point your CNAME to the new

    your_unique_cloudfront_distribution_name.cloudfront.net

    instead of to

    your_unique_subdomain.s3.amazonaws.com

    But after that, you treat it just like a normal S3 bucket.”

    My question is this:

    I have set up my Cloudfront CNAME in my DNS settings just as you suggested on that page.

    Now, how can I use the Cloudfront Service and still use a subdomain on my s3 account?

    For example,

    While I am using the s3 Service I want it to say something like

    my_subdomain.my_domain.com.s3.amazonaws.com/myfile.txt

    Then I can remove the s3.amazonaws.com part and just use the

    my_subdomain.my_domain.com/myfile.txt and I can access my file.

    How do I do the same for the Cloudfront Service? (I don’t necessarily want people to be able to see that I am using s3.)

    For example,

    While I am using the Cloudfront Service I want it to say something like

    your_unique_cloudfront_distribution_name.cloudfront.net/myfile.txt

    Is there a way to remove the

    your_unique_cloudfront_distribution_name.cloudfront.net

    part and just use the

    my_subdomain.my_domain.com/myfile.txt

    like I did with the s3 Service? (So that it looks like I am not using s3?)

    Last, on the post located on this page, you said… “it took about 2 hours for my subdomain to be recognized by AmazonAWS.”

    How exactly can you tell when your new subdomain is up and running when you are using the CloudFront service? Are the 3 S3 URLs you mentioned in the post on this page different for the CloudFront service, or are CloudFront URLs the same as the S3 URLs?

    Thanks for your help.

    Edward

    1. Author

      Edward, as I recall, you do everything the same, but you designate the bucket as a CloudFront bucket instead of the default bucket type. You then get the CloudFront features and pay a slightly higher price. I wish I could be more specific about how to designate a CloudFront bucket, but it’s been a while since I’ve done so.

    2. Author

      I should add that you do have to create a new S3 CloudFront bucket, and then a CNAME that points to it. For example, s3.carltonbale.com points to CNAME d18******.cloudfront.net.

  26. Can you please clarify for me: do I set my CNAME to point to my bucket, or just to s3.amazonaws.com?

    When I ask the WP Amazon plugin to verify that it is set up, it says no and to add my.bucket.com CNAME my.bucket.com.s3.amazonaws.com

    I’d love to get this working.

    Cheers.

    1. Author

      Here’s the deal. You set up your bucket as normal and store your files in it (my bucket name is s3.carltonbale.com). To create a CloudFront distribution from that bucket, you go to your AWS S3 control panel, create a new CloudFront distribution, and set the origin bucket (in my case, s3.carltonbale.com.s3.amazonaws.com). At this point, it will give you a new domain name for the CloudFront distribution, something like DA300348sdfs.cloudfront.net. You then configure that distribution in the CloudFront control panel and assign a CNAME, which is s3.carltonbale.com for me (so now both the S3 bucket and CloudFront are set to respond to that CNAME).

      Now you can use either the S3 version or the CloudFront version of the shared files. So if I go to my DNS settings and point the s3.carltonbale.com CNAME to s3.amazonaws.com, I get the files via S3. But if I set the CNAME to DA300348sdfs.cloudfront.net, I get the CloudFront version.
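
      In zone-file terms, switching between the two delivery paths is just changing the CNAME target (the distribution name here is the example one from above):

```
; serve the files via plain S3:
s3  IN  CNAME  s3.amazonaws.com.
; or, to switch to CloudFront, replace it with:
; s3  IN  CNAME  DA300348sdfs.cloudfront.net.
```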

  27. Thanks for the writeup. The steps you have given work perfectly if I use a sub-domain, that is, aws.mydomain.com pointing to aws.mydomain.com.s3.amazonaws.com.

    But I want to use AWS as my main hosting, i.e. mydomain.com pointing to mydomain.com.s3.amazonaws.com.

    Right now I’m using domain forwarding for this, but as you know, this reveals my AWS URL.

    When I try to configure a CNAME for mydomain.com, my DNS registrar’s control panel always gives an error like “CNAME already exists”.

    Is there any other way, like an A record, that I can use to make AWS serve my root domain?

    Thanks in advance

    1. Author

      I don’t know of a way to do this, and I probably wouldn’t recommend it. As I recall, there is no way to make AWS display a default page or file, so if someone goes to yourdomain.com, they will receive an XML file listing instead of the default index.html file that a dedicated web server would show.

