henrysher / cob

Yet Another Yum S3 Plugin (AWS SigV4)

Problems when deploying in new AWS account

gdhbashton opened this issue

Hullo! I've been using version 0.3.0 of your plugin for a few weeks and it's been working great. Today I deployed the same AMI, with the plugin pre-installed, into a different AWS account in the same region, eu-west-1, and it was unable to read from the S3 repo I had defined:

[root@bastion-id4160830 ~]# yum -v makecache
Loading "cob" plugin
Loading "fastestmirror" plugin
Config time: 0.017
Yum version: 3.4.3
base                                                                                                                                                                         | 3.6 kB  00:00:00     
epel/x86_64/metalink                                                                                                                                                         |  27 kB  00:00:00     
extras                                                                                                                                                                       | 3.4 kB  00:00:00     
Calculating signature using v4 auth.
CanonicalRequest:
GET
/repos/puppet/repodata/repomd.xml

host:stage-yumbucket-8ynuyi1pfclm-s3bucket-1o6a82dodg7d9.s3.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20141204T151556Z
x-amz-security-token:AQoDYXdzEKf//////////wEa0APrnoe2ndekV4VWUO+L2P2djEF8976KUc4/TB+auB5oATRVXvd7DN3Ek7O6/hyHNj6GsFLjPJVdsaoKOQL+/4uP9achHV+NEM95ul6GbgLeqS9+9EmlGp5W9lDlyIf/bypAf0sp+iG2U9oqKSe6jBDbWnM0OLgGSs4JeFpxp+avX51E9R5c7WONasOdIkFDW6WuJic+3OaKdFaZj4b0k7jZutzH8iDIufeit9h8JcYuncUzXb3e8jkB8bcLPLZRZe5kFeoF7n1NVjplA5BaNwZOtLoznG2BAJvqTDOo4yPw5M7SXVJ2cVH1uOSSb21T6+mLjflUKMUfL/Yscck/J7FLSh4VLCY/nmnGQCXbR1D9Bd+CSN+YqtoUHnFGzNlUFLoNd5Fp+PAWw5iuU2R786ceUB/leFn+kMDSvPVaoqDSXUGOoFJ0y4D2ysnDCm+jrg+8rvODsdcJM0sGxcXf14MLn7IK+MQMi+/5afCpGY+kbDafY57X8WvWgpDlDwVeg6bQyrH8Nf8uab8omN1sBOBtCkvmNteIckj+6eumf+G/yTu1O9RQq6nOOpNAyMUXOCvnZjDAboz0litvimxNUoH35tHXdQaIqB/GM+XUHu/0riDr2oGkBQ==

host;x-amz-content-sha256;x-amz-date;x-amz-security-token
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

StringToSign:
AWS4-HMAC-SHA256
20141204T151556Z
20141204/eu-west-1/s3/aws4_request
b356d64147c233fa7dd8f5dbb6d3250c87176aa3cf68c16dc8bb511247618c22

Signature: 7a4611566c112616134c1950b0ab6a1a298492492bf0acad1f61bb4739d5a19d
https://stage-yumbucket-8ynuyi1pfclm-s3bucket-1o6a82dodg7d9.s3.amazonaws.com/repos/puppet/repodata/repomd.xml: [Errno 14] HTTPS Error 403 - Forbidden
Trying other mirror.
failure: repodata/repomd.xml from itvs3: [Errno 256] No more mirrors to try.
https://stage-yumbucket-8ynuyi1pfclm-s3bucket-1o6a82dodg7d9.s3.amazonaws.com/repos/puppet/repodata/repomd.xml: [Errno 14] HTTPS Error 403 - Forbidden
puppetlabs-deps                                                                                                                                                              | 2.5 kB  00:00:00     
puppetlabs-products                                                                                                                                                          | 2.5 kB  00:00:00     
updates                                                                                                                                                                      | 3.4 kB  00:00:00     
Loading mirror speeds from cached hostfile
 * base: centos.mirror.constant.com
 * epel: ftp.heanet.ie
 * extras: mirror.atlanticmetro.net
 * updates: mirror.solarvps.com
Metadata Cache Created
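
For reference, the "Signature" in that log is derived from the StringToSign via the standard SigV4 HMAC key chain. A minimal Python sketch of that derivation (the secret key below is a placeholder; this is illustrative only, not cob's actual code):

import hashlib
import hmac

def hmac_sha256(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signature(secret_key, string_to_sign, date, region, service="s3"):
    # The signing key is an HMAC chain over the credential scope parts
    # (date/region/service/aws4_request), so a key derived for the wrong
    # region yields a signature the bucket's region rejects with a 403.
    k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    k_signing = hmac_sha256(k_service, "aws4_request")
    return hmac.new(k_signing, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# The StringToSign from the log: algorithm, timestamp, credential scope,
# and the SHA-256 hash of the CanonicalRequest.
string_to_sign = "\n".join([
    "AWS4-HMAC-SHA256",
    "20141204T151556Z",
    "20141204/eu-west-1/s3/aws4_request",
    "b356d64147c233fa7dd8f5dbb6d3250c87176aa3cf68c16dc8bb511247618c22",
])
print(sigv4_signature("PLACEHOLDER-SECRET-KEY", string_to_sign,
                      "20141204", "eu-west-1"))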

The instance definitely has an EC2 IAM role which allows it to access this bucket:

[root@bastion-id4160830 ~]# aws s3 cp s3://stage-yumbucket-8ynuyi1pfclm-s3bucket-1o6a82dodg7d9/repos/puppet/repodata/repomd.xml .
download: s3://stage-yumbucket-8ynuyi1pfclm-s3bucket-1o6a82dodg7d9/repos/puppet/repodata/repomd.xml to ./repomd.xml

The repo definition is this:

[itvs3]
name=itv-s3
baseurl=https://stage-yumbucket-8ynuyi1pfclm-s3bucket-1o6a82dodg7d9.s3.amazonaws.com/repos/puppet
metadata_expire=10s
enabled=1
gpgcheck=0

I found that if I embed the region name in the baseurl then it works (the working definition is shown below), but I don't understand why it was not required before, and why it is still not required for instances in the other AWS account.
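
For example, this variant of the repo definition works; the only change is the region-qualified endpoint in the baseurl:

[itvs3]
name=itv-s3
baseurl=https://stage-yumbucket-8ynuyi1pfclm-s3bucket-1o6a82dodg7d9.s3-eu-west-1.amazonaws.com/repos/puppet
metadata_expire=10s
enabled=1
gpgcheck=0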

Can you help? I did notice that resolving the hostname for each of the two buckets gives different output. From an instance in the original AWS account:

[root@bastion-i23f2e5c6 ~]# host sit-yumbucket-rbix7r3dlq8j-s3bucket-8trs8qdn31lj.s3.amazonaws.com
sit-yumbucket-rbix7r3dlq8j-s3bucket-8trs8qdn31lj.s3.amazonaws.com is an alias for s3-3-w.amazonaws.com.
s3-3-w.amazonaws.com has address 54.231.136.208

From an instance in the new account:

[root@bastion-id4160830 ~]# host stage-yumbucket-8ynuyi1pfclm-s3bucket-1o6a82dodg7d9.s3.amazonaws.com
stage-yumbucket-8ynuyi1pfclm-s3bucket-1o6a82dodg7d9.s3.amazonaws.com is an alias for s3-directional-w.amazonaws.com.
s3-directional-w.amazonaws.com is an alias for s3-directional-w.a-geo.amazonaws.com.
s3-directional-w.a-geo.amazonaws.com is an alias for s3-1-w.amazonaws.com.
s3-1-w.amazonaws.com has address 54.231.11.57

Using tcpdump, I was able to see that both instances successfully looked up the availability zone from the placement metadata at 169.254.169.254, and both resolved it to eu-west-1.

Any ideas warmly welcomed!

Cheers,
Gavin.

Hi Gavin

Thank you for using Cob!
You're still on version 0.3.0, with no updates from my latest code, right?
In the latest version, 0.3.1, you have to put the real endpoint for your bucket, with the region name included, into the baseurl, like "bucket-x-y-z.s3-eu-west-1.amazonaws.com"; this lets your instance access the yum S3 bucket in any region.

If you still have v0.3.0 installed on your instance, could you please run this command to see more debug logs:

URLGRABBER_DEBUG=1 yum -v makecache

Besides, I really recommend including the region name in your baseurl, as that is the real endpoint for an AWS S3 bucket. The AWS CLI for S3, by default, only supports Auth Signature v2, which does not include the region name in the signature. For Cob, though, we have enabled Auth Signature v4, as it is more secure and is where things are heading: the new Germany (Frankfurt) region only supports v4.
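
To illustrate the difference: with the region in the hostname, a v4 client can read the signing region straight off the endpoint instead of having to guess it. A rough sketch of the idea (illustrative only, not the exact code in Cob):

import re

def signing_region(host, fallback_region):
    # "bucket.s3-eu-west-1.amazonaws.com" carries its region in the
    # hostname; the generic "bucket.s3.amazonaws.com" does not, so the
    # client has to guess (e.g. from the instance's availability zone),
    # and a wrong guess is rejected with a 403 like the one above.
    match = re.search(r"\.s3-([a-z0-9-]+)\.amazonaws\.com$", host)
    return match.group(1) if match else fallback_region

print(signing_region("bucket-x-y-z.s3-eu-west-1.amazonaws.com", "us-east-1"))
print(signing_region("bucket-x-y-z.s3.amazonaws.com", "us-east-1"))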

If you have any questions or problems, please let me know.
Best wishes,
Henry

Henry,

Thanks for the reply - I've come back into the office and tried to reproduce the problem by once again removing the region name from the repo URL. However, this morning everything is working fine, even after terminating and starting a new instance.

:/

I do take your point about v4 signatures: since we will be using the Frankfurt region very soon, I will move to including the region name.

Cheers,
Gavin.