widdix / aws-ec2-ssh

Manage AWS EC2 SSH access with IAM

Home Page: https://cloudonaut.io/manage-aws-ec2-ssh-access-with-iam/


Fallback behavior when IAM is down

zxlin opened this issue

I'm sure many people noticed the very brief IAM outage earlier this week. While IAM was unresponsive, this script went ahead and deleted all of the local users synced from IAM, because IAM did not return a list of users.

I was hoping to discuss the options for some fallback behavior in the event of an IAM outage, or just a plain network connectivity outage.

If the IAM API is down, the CLI will fail. As a result, the script will fail; it should not delete all users.
Why do you think that this will happen?
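
That is the intended failure mode: a minimal sketch of such a guard, assuming the sync fetches users with aws iam list-users (the variable name and log message are illustrative, not the shipped script):

    # Abort the whole sync if the CLI call itself fails (outage, throttling,
    # no network) instead of continuing with an empty user list.
    if ! iam_users=$(aws iam list-users --query "Users[].UserName" --output text); then
        logger -p auth.err -t aws-ec2-ssh "aws iam list-users failed; skipping sync"
        exit 1
    fi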

It happened during the IAM outage earlier this week: /var/log/auth.log shows aws-ec2-ssh deleting all the users during the outage. Perhaps IAM returned a 200 and an empty list while it was still recovering?

Nonetheless, this is still a case we have to consider.

That's kind of unexpected... The problem is that we cannot really decide whether an empty list means there are no IAM users or that something is broken...

It would be kind of annoying to implement, but a configurable number n of confirmations that a user is gone before actually deleting the user would be nice.

Keep some state: the list of users and, for each user, a counter of how many times they have been missing from the returned list; on the n-th sync where a user is still missing, delete the user. Roughly like the sketch below.
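
A minimal sketch of that idea, assuming a cron-driven sync; MISS_DIR, MAX_MISSES, and handle_missing_user are all made up for illustration:

    MISS_DIR=/var/lib/aws-ec2-ssh/misses
    MAX_MISSES=3
    mkdir -p "$MISS_DIR"

    # $1: local user missing from the list IAM returned on this sync.
    handle_missing_user() {
        local user="$1"
        local counter_file="$MISS_DIR/$user"
        local misses=0
        [ -f "$counter_file" ] && misses=$(cat "$counter_file")
        misses=$((misses + 1))
        if [ "$misses" -ge "$MAX_MISSES" ]; then
            # Missing on n consecutive syncs: delete for real.
            userdel "$user"
            rm -f "$counter_file"
        else
            # Not yet confirmed: record the miss and keep the user for now.
            echo "$misses" > "$counter_file"
        fi
    }

    # A user that shows up in IAM again gets their counter reset:
    # rm -f "$MISS_DIR/$user"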

I don't see how this would solve the problem. Depending on the length of the outage, the script would still delete all the users?

curl has lots of exit codes, so you can see exactly why the download of data failed and act accordingly.

@michaelwittig n can be set to an org's fault-tolerance preference, and if the outage does last longer than the n confirmations allow, it at least gives the org time to react to the system failure.

This is not the only way to fix it; it's just what I thought of first. I'm completely open to other ways to introduce some fault tolerance into this.

@richard-scott there is no curl involved here at all...

I saw this issue, and maybe it's the problem I am facing here.
Today, in the same afternoon, import_users deleted the users twice.

@assertnotnull we log to system log (priority auth.info | tag aws-ec2-ssh) when users are created or deleted by ec2-ssh. Depending on the configuration of your OS, the logs will be placed in different log files. Usually it is /var/log/messages. Could you provide the relevant log lines?
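
For anyone hunting for these entries, they are written with that priority and tag, i.e. roughly as if by the first command below (the message text is illustrative), so the second command should find them on most systems:

    logger -p auth.info -t aws-ec2-ssh "Deleting user someuser"
    grep aws-ec2-ssh /var/log/auth.log /var/log/messages 2>/dev/null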

@assertnotnull OK. And at what time was the user added again after Nov 1 20:40:20?

The same is happening to me during an IAM outage: https://twitter.com/gbcis/status/993502731762655232
Even if the users are kept during an outage like this, they won't be able to log in to your system. Is there a way to cache public keys?

After having this code shoot us in the face on three separate occasions, here are the changes I made to better protect myself:

  • removed the -r from the userdel invocation, so that home directories are left in place when an outage results in users being deleted
  • added extra bailouts to import_users because, at least for our environment, YES WE CAN conclude that an empty user list means something is horribly, horribly broken (see the sketch after this list)
  • added generation of authorized_keys files to import_users and stripped the sshd configuration back out
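
A sketch of the bailout and userdel changes described above; the variable names are illustrative rather than copied from the fork:

    # Treat an empty IAM user list as an outage and abort the sync,
    # instead of interpreting it as "delete everyone".
    if [ -z "$iam_users" ]; then
        logger -p auth.err -t aws-ec2-ssh "IAM returned an empty user list; assuming outage, aborting"
        exit 1
    fi

    # And userdel without -r, so a wrongful delete at least leaves the
    # home directory intact:
    userdel "$user"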

@packetfairy Would you mind linking your repo? I'd love to take a look!

I am managing the code as part of a larger repo, probably in a lamer way than I should be (i.e. without using submodules 😝), but here's a fork with my changes incorporated: https://github.com/packetfairy/aws-ec2-ssh/tree/condoms

@packetfairy lol branch name. But I like this a lot actually!

Maybe the fallback logic could be enabled by an IAM_OUTAGE_PROTECTION flag in aws-ec2-ssh.conf, so that organizations can choose their own fault-tolerance level.

Thoughts @michaelwittig?
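
A sketch of what that could look like in aws-ec2-ssh.conf; both variable names are hypothetical and not implemented upstream:

    # Hypothetical settings: enable outage protection, and require this many
    # consecutive syncs with a user missing before actually deleting them.
    IAM_OUTAGE_PROTECTION=true
    IAM_OUTAGE_PROTECTION_CONFIRMS=3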

@packetfairy is it just me, or are you still configuring sshd to run the script to authorize logins?

This way, users will still be able to access the instance even if the IAM APIs are down: the only thing that won't work is updates to SSH keys.

@MassimoSporchia I removed the AuthorizedKeysCommand and AuthorizedKeysCommandUser configuration options from sshd_config myself, but I did not update the installation/setup scripts to match, as I manage my sshd_config separately with Ansible. (Is that what you meant?)

Because, yes, as you observe: using this code, existing users should be able to access the instance even if the IAM API is down.
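
For context, these are the sshd_config directives in question (the script path shown matches a typical install of this project, but may differ per setup):

    # Removed from /etc/ssh/sshd_config once local authorized_keys files exist:
    # AuthorizedKeysCommand /opt/authorized_keys_command.sh
    # AuthorizedKeysCommandUser nobody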

The bailouts on empty iam_users and sudo_users prevent good users from being deleted from the system during an outage. Having a local authorized_keys file for each user prevents authentication timeouts/errors during an outage. And given that I now had local authorized_keys files generated with current data from the API, it felt foolish to also make an API call every time a user authenticated, so I just stripped that bit out.

As an unintended consequence, the first authentication pass is now lightning fast again!
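
Roughly, the per-user authorized_keys generation boils down to the following; the aws iam subcommands are real, but the loop and file handling are simplified and assume IAM user names map one-to-one to local users:

    for user in $iam_users; do
        home=$(eval echo "~$user")
        mkdir -p "$home/.ssh"
        : > "$home/.ssh/authorized_keys"   # start from a clean file
        # Fetch every active SSH public key the IAM user has uploaded.
        for key_id in $(aws iam list-ssh-public-keys --user-name "$user" \
                --query "SSHPublicKeys[?Status=='Active'].SSHPublicKeyId" --output text); do
            aws iam get-ssh-public-key --user-name "$user" \
                --ssh-public-key-id "$key_id" --encoding SSH \
                --query "SSHPublicKey.SSHPublicKeyBody" --output text \
                >> "$home/.ssh/authorized_keys"
        done
        chown -R "$user:$user" "$home/.ssh"
        chmod 700 "$home/.ssh"
        chmod 600 "$home/.ssh/authorized_keys"
    done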

I merged the empty IAM user list detection from @packetfairy (Thanks!)

The ability to configure userdel (e.g. home directory removal) is tracked in #112

The caching of local users is also discussed in #114