Enabling Secure, Fast & Cost-Effective Data Transfers to the Cloud

As we work with our customers on their cloud adoption and expansion, getting on-premises data to the cloud in a fast, secure, and cost-effective way has been one of our key focus areas.

Earlier this week, our cloud partner AWS announced new services and enhancements that help customers move their data from on-premises environments to the cloud quickly and efficiently. We took these for a test drive and are excited to share our findings with you.

Starting with S3 Transfer Acceleration: in addition to uploading your data directly to an S3 endpoint, it is now possible to route uploads through a nearby edge location, which acts as a bridge to S3. Since there are significantly more edge locations than S3 regions, the odds that an edge location is closer to you than your bucket's region are high, so the transfer acceleration follows naturally. Transfer acceleration is enabled on a per-bucket basis.

Enabling S3 Transfer Acceleration is easy via the AWS console: the new option appears under the bucket's Properties section, where a single click enables or suspends acceleration.
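If you prefer the command line, the same setting can be flipped with the AWS CLI; a quick sketch using our bucket from the examples below (substitute your own bucket name):

aws s3api put-bucket-accelerate-configuration --bucket ctrworld-public --accelerate-configuration Status=Enabled

Passing Status=Suspended instead turns acceleration back off.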

It is important to note that transfer acceleration helps only when you are closer to an edge location than to the region where your S3 bucket resides. You can check this with a neat utility that compares your transfer speed to S3 with acceleration (via an edge location) and without it, i.e. directly to S3. A link to the tool is also available under the bucket's Properties section.

Once transfer acceleration is enabled, you will get a new endpoint for accelerated S3 transfers via edge locations, similar to the one below:

ctrworld-public.s3-accelerate.amazonaws.com

You can use this new endpoint for faster data transfers to S3; note that accelerated transfers incur an additional charge per gigabyte transferred.

You can also continue to use your bucket's regular endpoint for standard S3 transfers:

http://ctrworld-public.s3-us-west-2.amazonaws.com
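As a sketch of an accelerated upload with the AWS CLI (the file name is a placeholder, and this assumes a CLI version that supports the use_accelerate_endpoint setting):

# Route s3 commands through the accelerated endpoint by default
aws configure set default.s3.use_accelerate_endpoint true
# Subsequent transfers now go via the nearest edge location
aws s3 cp backup.tar.gz s3://ctrworld-public/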

AWS Snowball – During re:Invent last year, AWS introduced its Snowball appliance service to streamline disk-based data transfers. It lets customers rent a secure storage appliance from AWS and use it for the transfer rather than purchasing compatible hard disks. This year, Snowball is available in more regions and in larger capacities. We strongly encourage our customers to use Snowball for large-volume data transfers.

Architecting for Cloud Security: Bastion Hosts

When it comes to cloud security, a number of concerns come up. These can be broadly classified into two categories: the threats faced by customers who run their applications and data in the cloud, and the threats faced by the cloud vendors themselves. The latter is beyond our control, and most cloud providers today go through rigorous compliance and certification processes to address it. As such, cloud security is almost always a shared responsibility between the cloud vendor and the customer.

This post is the first part of our series on cloud security from a customer perspective, covering secure access. More specifically, we will look at an effective way to tighten access to your resources in the cloud.

Bastion Hosts – A bastion host is an instance that resides in a public subnet or DMZ within your cloud network and is typically accessed using SSH or RDP. Once you connect to the bastion host, it establishes secure connectivity to the other instances within your private network.

In order for bastion hosts to provide an effective security layer, the security groups and network ACLs must be configured properly. In a typical scenario, the bastion host acts as a bridge, giving you a secure access path to the other instances within your private subnet.
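As a minimal sketch of that configuration (the security group IDs are placeholders), the private instances' security group can be restricted so that SSH is accepted only from the bastion's security group:

# Allow inbound SSH to the private instances only from the bastion's security group
aws ec2 authorize-security-group-ingress --group-id <private-subnet-sg-id> --protocol tcp --port 22 --source-group <bastion-sg-id>

With this rule in place, the private instances accept SSH from the bastion host but not from the internet at large.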

Private Keys and SSH Forwarding – By default, Linux instances in EC2 authenticate with SSH key files instead of usernames and passwords. We strongly recommend that our customers use key pairs, as they remove the opportunity for attackers to guess usernames and passwords. This, however, presents a challenge with bastion hosts: the private keys needed to access the instances over SSH are stored securely on client laptops, and copying them onto the bastion hosts would not be prudent either.

One solution is to enable SSH agent forwarding. Rather than copying the keys to the bastion hosts, agent forwarding relays authentication requests from the bastion back to the agent running on your laptop, so the keys can be used to log in to the remote machines without ever leaving your machine.

Most SSH clients support agent forwarding. Before you can use it, you will need to add your private keys to your SSH agent, as below.

Adding private keys to your SSH agent

myMBP:~ samx18$ ssh-add -k MyCTREC2Key.pem

You can list the keys that are added to your agent as below

myMBP:~ samx18$ ssh-add -l
2048 41:f7:54:7d:41:26:99:26:b0:3c:09:62:6a:3d:70:42 /Users/XXXXXXXXXX/MyOregonEC2Key.pem (RSA)
2048 e2:bb:2c:6a:69:0f:93:09:43:83:f7:f8:2a:36:89:52 /Users/XXXXXXXXXX/MyCTREC2Key.pem (RSA)
2048 cd:9e:1f:99:a2:c1:9a:e0:5d:80:11:c1:0e:65:9a:79 /Users/XXXXXXXXXX/SamOregonEC2.pem (RSA)
2048 e6:d6:78:13:2e:6b:e0:e5:3f:83:f6:f8:c7:54:c3:d7 /Users/XXXXXXXXXX/RealEBSKey.pem (RSA)

Now you can use the -A option to enable agent forwarding when connecting to the bastion host

myMBP:~ samx18$ ssh -A ec2-user@<bastion-IP-address>

Once connectivity is established to the bastion host, you can reach the instances within your private network via a simple ssh, as below

ssh ec2-user@<IP-private-subnet-instance>

The ssh-agent will automatically cycle through all the keys available in its keychain, including the ones forwarded from your laptop, until it finds one with which it can log on to the instance.
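If you connect to the bastion host often, you can make agent forwarding the default in your SSH client configuration instead of typing -A each time; a minimal sketch for ~/.ssh/config (the host alias is our choice, the address placeholder is as above):

Host bastion
    HostName <bastion-IP-address>
    User ec2-user
    ForwardAgent yes

After this, a plain "ssh bastion" connects with forwarding enabled.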

Lastly, as with everything else in your architecture, make sure you build in redundancy for your bastion hosts so that you do not lose access to your instances if a bastion host fails.

awsBot: An AI-Enabled Bot for the Cloud

We love bots, and we sure do love building them. Today we are excited to unveil our very first AI-enabled bot for the cloud: awsBot. The bot plugs right into your existing enterprise communication/IM software, such as Slack, and manages your resources on the cloud. The bot is simple, secure, and highly scalable. It currently works with EC2, S3, and AWS CloudFormation.

Why we built this
We run multiple production workloads on AWS and the Google Cloud Platform, so we almost always have the AWS console and CLI tools running in the background. Although a combination of the console and the CLI covers a lot, we still have a number of custom programs that we run manually. The CTR team is spread across the globe, so we also use Slack a lot; in fact, we love it. We were excited to explore the possibility of bringing the power of AWS automation to Slack. The idea was to do a significant portion of our AWS-related work right from within Slack: no commands, CLI, or console.

What it does

The bot takes instructions from an authorized Slack channel and carries out common AWS tasks that you would typically do using the console or the AWS command-line tools. Once users are logged in to their Slack channel, they do not need to log in again; the bot uses the Slack token combined with AWS IAM roles to enforce security and access control. Currently the bot can perform a variety of common tasks with EC2 and S3: for example, it can start and stop instances, and it can quickly list the status of your instances in a particular region, something we use quite often. Similarly, for S3 it can list your buckets and the contents of a bucket.

The bot can also take backups at an instance level and list all available snapshots for an instance. Using CloudFormation, the bot can create resources for you on AWS. It also provides live feedback as the status of your resources changes in AWS, for example when new resources are being deployed or when instances are being turned off, all without leaving Slack.
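For context, these are the kinds of operations the bot wraps. Done by hand with the AWS CLI, the equivalents would look something like this (the IDs are placeholders):

# Start an instance and check instance status in a region
aws ec2 start-instances --instance-ids <instance-id>
aws ec2 describe-instance-status --region us-west-2
# Take an instance-level backup by snapshotting one of its volumes
aws ec2 create-snapshot --volume-id <volume-id> --description "awsBot backup"
# List buckets and the contents of a bucket
aws s3 ls
aws s3 ls s3://ctrworld-public/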

Core Architecture

  • The bot is built from the ground up on a serverless architecture, so it scales out of the box.
  • It is exposed only via REST APIs, making integrations seamless and quick.
  • Security is built in via AES-256 encryption and TLS/SSL (see below for additional details). No credentials are ever saved.

Security

Security was one of the primary challenges when we started working on this. Even before we had written the first line of code for the bot, we knew we needed to implement some kind of encryption along with IAM roles. We have used client-side and server-side encryption in the past and wanted to build the same level of protection into this bot. Currently the bot uses AWS Key Management Service (KMS) to securely encrypt the communication tokens and prevent any unauthorized use. In addition, all communication between the bot and AWS is over TLS/SSL.
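As a sketch of the token-encryption step (the KMS key alias and file name are hypothetical):

# Encrypt the Slack token under a KMS key; only the ciphertext is kept
aws kms encrypt --key-id alias/awsbot-tokens --plaintext fileb://slack-token.txt --query CiphertextBlob --output text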

What’s next for awsBot

  • Expanding coverage to other AWS services such as Amazon Redshift and RDS
  • Building additional intelligence via natural language processing
  • Adding additional capabilities to the bot beyond AWS and Slack

Reduce your AWS Costs by up to 30%

CTR AWS Scheduler

CTR’s AWS scheduling toolkit lets you manage, optimize and govern your AWS Cloud resources from a single place and reduce your AWS costs by up to 30%.

Why a scheduler?

Optimize – Set up a schedule according to your business needs, and the scheduler will manage your startup, shutdown, and other maintenance tasks (see the illustration after this list).
Govern – Quickly deploy a governance policy across your cloud deployments, just as you would with your on-premises infrastructure.
Simple – Easy to set up and to scale according to your needs.
Secure – Uses standard AWS encryption and IAM roles. No credentials, access keys, or tokens are shared or stored.
Compatible – Out-of-the-box integration with all supported AWS resources and regions.
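For illustration only (the Schedule tag and its value are hypothetical, and this is not the scheduler's own interface): the kind of off-hours shutdown the toolkit automates looks like this when done manually with the AWS CLI:

# Find running instances tagged for office-hours operation...
aws ec2 describe-instances --filters "Name=tag:Schedule,Values=office-hours" "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].InstanceId" --output text
# ...and stop them outside business hours
aws ec2 stop-instances --instance-ids <instance-id>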

CTR AWS Specializations

Managed Cloud Services
Round-the-clock managed services let you focus on your business while we take care of the cloud operations.
Cloud Adoption
We deep-dive into your business goals, then design and build the right cloud solution for your organization.
Cloud DevOps
Continuous integration using tools like Chef, Puppet, AWS CloudFormation, Elastic Beanstalk, and OpsWorks.
Compliance
Cloud based solutions that meet HIPAA, PCI and ITAR requirements on AWS.
High Availability
Build cost-effective, highly-available and durable cloud solutions on AWS Infrastructure.

AWS Product Announcements

If you missed AWS re:Invent in December, you may not have heard about some of the new products Amazon Web Services has released.

AWS OpsWorks

AWS OpsWorks is a configuration management service that uses Chef, an automation platform that treats server configurations as code. OpsWorks uses Chef to automate how servers are configured, deployed, and managed across your Amazon Elastic Compute Cloud (Amazon EC2) instances or on-premises compute environments. OpsWorks has two offerings: AWS OpsWorks for Chef Automate and AWS OpsWorks Stacks.
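As a quick illustration, once you have stacks defined in OpsWorks Stacks, you can inspect them from the AWS CLI:

aws opsworks describe-stacks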

VMware on AWS Cloud

Customers will be able to use VMware's virtualization and management software to seamlessly deploy and manage VMware workloads across all of their on-premises and AWS environments. This offering lets customers leverage their existing investments in VMware skill sets and tooling to quickly take advantage of the flexibility and economics of the AWS Cloud.

Schema Conversion Tool

The AWS Schema Conversion Tool (SCT) can now convert Netezza and Greenplum data warehouse schemas to their equivalents in Amazon Redshift, giving you the flexibility to move to AWS' fast, fully managed, petabyte-scale data warehouse.
For existing Amazon Redshift users, SCT can now analyze your current schema, automatically collect statistics, and run an optimization pass that suggests different distribution and sort keys to improve performance. These suggestions are summarized in a report and can be applied to a cluster.

CodeBuild

AWS CodeBuild builds and tests code in the cloud. CodeBuild scales continuously, so your builds are not left waiting in a queue. You are charged by the minute for the compute resources you use. You can also use CodeBuild with other AWS services. For example, you can plug CodeBuild into AWS CodePipeline, which automates building and testing code in CodeBuild each time you commit a change to your source repository.
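Builds are driven by a buildspec.yml file at the root of your source repository; a minimal sketch (the build command is a placeholder for your own):

version: 0.2
phases:
  build:
    commands:
      - echo Build started
      - make build
  post_build:
    commands:
      - echo Build completed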

If you would like to learn more about Amazon Web Services and how it can help your business, contact Nash Naidoo at nnaidoo@ctrworld.com or at 714.665.6507 x60.

Now Open! AWS Canada Region

The new Canada (Central) Region is now available, and you can start using it today. AWS customers in Canada and the northern parts of the United States now have fast, low-latency access to the suite of AWS infrastructure services.
As part of AWS' ongoing focus on making cloud computing available in an environmentally friendly fashion, the AWS data centers in Canada draw power from a grid that generates 99% of its electricity using hydropower.
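To point the AWS CLI or SDKs at the new region, use its region code, ca-central-1; for example:

aws ec2 describe-availability-zones --region ca-central-1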