Tutorial Archive – Road to AWS
This is my cloud journey

Upgrading Amazon Lightsail instance bundles – get an additional vCPU for free

Starting May 1, 2024, your Amazon Lightsail prices are likely to increase if you are using IPv4 addresses. The current prices can only be maintained if you switch to the new IPv6-only instance bundles. But what if I told you that you can get an extra vCPU for free if you are an old Lightsail customer? Sounds good? 😎 Either way, the transition is not a click away. Read on to learn how to safely upgrade your Lightsail instance.

One of the biggest disadvantages of Lightsail over EC2 is that you cannot easily change your instance type. Currently, there is no one-click option to switch from a 512 MB memory bundle to a 1 GB memory bundle. AWS is aware of this issue and is working on a solution that will allow you to upgrade to a larger bundle in the future. Until then, we have to do it the hard way. 💪

How do I get 1 vCPU for free?

Previously, the cheaper Lightsail bundles only offered 1 vCPU. This was because the underlying t2 instances had 1 vCPU options. Now that AWS Lightsail has moved to the new t3 instance family, there are no 1 vCPU options, only 2 vCPU ones. While there has been a change under the hood, AWS has kept its Lightsail pricing the same, so you get an additional vCPU for free when you move to the t3 instance family.

Upgrading a Lightsail instance

To upgrade your Lightsail instance, you must create a manual snapshot. Under Snapshots, click Create Snapshot and give your snapshot a name. Creating a snapshot takes time, so be patient while your snapshot is being created.

While the snapshot is being created, go to the Networking tab, and if you have an IPv4 address, be sure to click Create static IP. This will allow you to keep your current public IPv4 address. Unfortunately, there is no option to keep your IPv6 address, so you will need to update your DNS settings in the future.

Under IPv4 firewall, make a note or a screenshot of your firewall rules, as they won’t be migrated either. If you have IPv6 networking, do the same for these rules as well, since you may have different IPv4 and IPv6 firewall rules.

By now, your snapshot has probably been created. Select Snapshots from the left menu and you will see your newly created snapshot. Click on the three dots and select Create new instance. Here you can select an IPv6 bundle if you don’t need a public IPv4 address, or one of the new 2 vCPU options. There is only one restriction: you cannot select a smaller bundle than the one you already have.

💡 Note that Lightsail instance names are unique, so you cannot give your new instance the same name as your previous one (and there is no option to rename an instance later). I added www. in front of my instance name.

Now that your new instance is up and running, first detach your IPv4 address from your old instance and attach it to your new one. Also, re-enter the firewall rules that you saved earlier.

Do not forget to update your DNS settings with your new IPv6 public address. Stop your old instance and test your new instance. 🧪
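
If you prefer the terminal, the same migration can be sketched with the AWS CLI. This is a hedged outline rather than the exact console flow – the instance, snapshot, and static IP names are placeholders, and you should pick the new bundle ID from the output of get-bundles:

# List the available bundles and note the ID of the one you want
aws lightsail get-bundles

# Snapshot the old instance
aws lightsail create-instance-snapshot \
    --instance-name wordpress-old \
    --instance-snapshot-name upgrade-snapshot

# Create the new instance from the snapshot (bundle ID is a placeholder)
aws lightsail create-instances-from-snapshot \
    --instance-snapshot-name upgrade-snapshot \
    --instance-names www-wordpress \
    --availability-zone eu-central-1a \
    --bundle-id <new-bundle-id>

# Move the static IP from the old instance to the new one
aws lightsail detach-static-ip --static-ip-name my-static-ip
aws lightsail attach-static-ip --static-ip-name my-static-ip --instance-name www-wordpress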

Cleanup

Once you have verified that the new instance is fully functional, you can perform a manual cleanup. First, go to Snapshots and delete the snapshot you created for the upgrade. AWS charges $0.05 USD per GB per month for snapshots, so if you don’t need it, just delete it. You will also need to manually delete the old instance.

Amazon Lightsail is the easiest way to get started with AWS; upgrading your instance is not. Until AWS implements this feature, you must upgrade your Lightsail instance manually.

Free and easy DIY digital business card

Recently, I wanted to order a new business card for myself and while Googling I came across dozens of startups that produce digital business cards. After checking out several offers, I realized that the most important thing these companies lack is reliability. If you give someone a physical business card, you can be sure that they will know your information for a long time (unless they lose it 🤫). There’s no guarantee that these startups will still be around in 5 or 10 years, or that they won’t raise their fees. That is why I created the serverless-business-card.

I saw a video on YouTube about how to make your business card smart with a simple NFC sticker. The problem is that while you can program a vCard into a sticker, iOS devices don’t support them yet. The only way to get an iPhone to read an NFC vCard is to host the vCard file on the web. Then it hit me. 🤯 Why not host the vCard on AWS using only free tier resources. 😎

The obvious solution was Lambda and Lambda Function URLs since they are completely free. Plus, you can be sure that AWS will still be around in 5 or 10 years, so your digital business card will still be running.
Also, it’s very easy to update your information if something changes; you don’t have to buy a new card. Which is good for the environment too! 👍 🌎

During development I ran into issues that required creating extra policies to make everything work. Since I wanted to make it as simple as possible for everyone to use, I created a CloudFormation template that creates all the resources for you.
And when you no longer need it, CloudFormation can delete all the used resources. But why would you do that when it’s completely free? 🤑🤑🤑

The code is written in Node.js 18.x and produces a v. 3.0 vCard. You might ask why not v. 4.0, and the answer is simple: Apple doesn’t support it, and I wanted to make the card as compatible as possible.
The other problem I faced is that according to the vCard specification you can link an image URL as your photo, but Apple devices don’t support that either. The photo has to be Base64-encoded in the vCard itself.
That is why CloudFormation creates an S3 bucket where you can store your photo (avatar.jpeg); the Lambda function converts it to Base64 and includes it in your card.

Not just Apple – AWS has some weird quirks too. For example, when you create a Function URL for your Lambda function, this URL is not exposed as a Lambda environment variable. To read the Function URL, your function needs the lambda:GetFunctionUrlConfig permission. Since a vCard lets you define a SOURCE where the latest version can always be fetched, I had to create a policy and attach it to the Lambda role so the function can include its own Function URL in the vCard.
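
To make the moving parts concrete, here is a minimal sketch of such a function – not the actual code from the repository. It assumes a PHOTO_BUCKET environment variable pointing to the S3 bucket, the avatar.jpeg key described above, and placeholder name fields:

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { LambdaClient, GetFunctionUrlConfigCommand } from '@aws-sdk/client-lambda';

const s3 = new S3Client();
const lambda = new LambdaClient();

export const handler = async () => {
    // Read the photo from S3 and Base64-encode it, since Apple ignores photo URLs
    const object = await s3.send(new GetObjectCommand({
        Bucket: process.env.PHOTO_BUCKET, // assumed environment variable
        Key: 'avatar.jpeg'
    }));
    const photo = await object.Body.transformToString('base64');

    // Look up this function's own URL to embed as the vCard SOURCE
    // (requires the lambda:GetFunctionUrlConfig permission)
    const { FunctionUrl } = await lambda.send(new GetFunctionUrlConfigCommand({
        FunctionName: process.env.AWS_LAMBDA_FUNCTION_NAME
    }));

    const vcard = [
        'BEGIN:VCARD',
        'VERSION:3.0',
        'N:Doe;John;;;', // placeholder name
        'FN:John Doe',
        `SOURCE:${FunctionUrl}`,
        `PHOTO;ENCODING=b;TYPE=JPEG:${photo}`,
        'END:VCARD'
    ].join('\r\n');

    return {
        statusCode: 200,
        headers: { 'Content-Type': 'text/vcard; charset=utf-8' },
        body: vcard
    };
};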

The other issue I faced is that while you can include your Lambda code inline in CloudFormation, it creates an index.js file instead of the index.mjs required for ES modules on Node.js 18.x. There is a way to keep the code in an S3 bucket and have CloudFormation retrieve it from there, but then you are stuck with the region where your S3 bucket is. So I created two CloudFormation templates. 😀
If you want the easiest installation and don’t mind the region, use the default template, which runs in US East (N. Virginia). If you want to host your business card in another region, use template-with-code.yaml instead, but you will need to rename index.js to index.mjs for the code to work.

All the source code is available on GitHub under the Apache 2.0 license. See the GitHub page for detailed installation information.
Use template.yaml if you want the simplest installation.
If you want to specify the region in which the resources are created, use the template-with-code.yaml stack instead and rename the index.js source file to index.mjs.

I hope this little code is as useful as it was fun to write it. 👨‍💻

Creating a Serverless Mastodon Bot

With the growing popularity of the Fediverse, I decided to take a look at what this decentralized social network has to offer for developers.

I chose Mastodon as my primary platform because it is the most popular of all. You may choose otherwise, as these networks can communicate seamlessly with each other no matter what server you are running.

Twitter (now: X), as a commercial company, has the right to restrict or commercialize its API, which can be a struggle for startups and small developers. Mastodon is not only free and open source, but also much more developer friendly. One such feature is the support for bot accounts. You are not limited with these accounts; in fact, you are encouraged to use them. In Mastodon, you can specifically mark an account as a bot, making it more transparent to everyone. 🫶

The first step is always the hardest: choosing your Mastodon server. There are many to choose from; some are for specific communities, some are geographically restricted. If you are unsure, just stick with the oldest: mastodon.social.

Create an account here and check the This is an automated account box under your profile. This will let others know that this is a bot account. Under Development, create a new application and select the appropriate permissions. Since my bot will only publish, I only selected write:statuses.

In a previous blog post I created a website for Hungarian tech conferences. I will use this as my input source. Currently this site doesn’t offer an easy way to export information, so I modified the Jekyll code to generate a CSV file for the upcoming events. This way I can parse the data more easily.

The Serverless Approach

From the title of this post, you have probably guessed that I am going to take a serverless approach. I don’t want to deal with security updates and patches. I just want this bot to work with very little maintenance.

💡 Tip: Choose arm64 as your Lambda architecture because it is cheaper to run.

There are a handful of API clients for Mastodon to choose from. Since I will be using Node.js 18.x for the runtime, I wanted to find one that was compatible with it. My choice was Masto.js, which is maintained quite frequently and supports most of the Mastodon API features.

To download the CSV data from techconf.hu, I will use Axios, as in my previous projects. For parsing the CSV, my choice was csv-parse (watch out: there are multiple CSV parsers out there, and some names differ only by a hyphen). I then created a separate Layer for each package and attached them to my Lambda function.

Making it all work

The code is pretty simple. First I download the CSV file and parse it with csv-parse. Then I compose the Toot (Mastodon’s term for a Tweet) and publish it with Masto.js.

One problem I faced is that in Mastodon every Toot has a language variable. If you don’t set it explicitly, it defaults to the one set in your Mastodon account.

💡 Tip: Since the Fediverse is so decentralized, it is a good idea to tag all your posts.

import { parse } from 'csv-parse';
import { login } from 'masto';
import axios from 'axios';

export const handler = async(event) => {
    var tweet = "Upcoming Hungarian Tech Conferences 🇭🇺\n\n";
    var conferencesThisWeek = false;
    // Build a [today, today + 7 days] window, normalized to midnight
    const currentDate = new Date();
    const endOfWeek = new Date(new Date().setDate(new Date().getDate() + 7));
    currentDate.setHours(0,0,0,0);
    endOfWeek.setHours(0,0,0,0);
    var conferenceDate;
    var csv;
    
    // Download the CSV export generated by techconf.hu
    await axios({
        url: 'https://techconf.hu/conferences.csv',
        method: 'GET',
        responseType: 'blob'
    }).then((response) => {
        csv = response.data;
    });
    
    // Parse the CSV, skipping the header row
    const parser = parse(csv, {
        delimiter: ",",
        from_line: 2
    });
    
    // Collect every conference that falls within the next seven days
    for await (const record of parser) {
        conferenceDate = new Date(record[3]);
        if (currentDate <= conferenceDate && conferenceDate <= endOfWeek) {
            tweet += '👉 ' +record[0] + ' (' + record[2] + ')\n📅 ' + record[3] + ' - ' + record[4] + '\n🔗 ' + record[1] + '\n\n';
            conferencesThisWeek = true;
        }
    }
    
    if (conferencesThisWeek) {
        tweet += '#Hungary #Technology #Conference';
        
        // Authenticate against the Mastodon instance
        const masto = await login({
            url: 'https://mastodon.social/api/v1/',
            accessToken: '' // access token of your Mastodon app (redacted)
        });
    
        // Publish the Toot, explicitly tagged as English
        await masto.v1.statuses.create({
            status: tweet,
            visibility: 'public',
            language: 'en'
        });
    }
    
    // Plain success response left over from the Lambda template
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response;
};

Scheduling

The easiest way to schedule a Lambda function is to use the Amazon EventBridge Scheduler. Simply select your schedule pattern and the Lambda function as the target, and it will execute your code at the given time.
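
For reference, the same schedule can be sketched with the AWS CLI. Everything here is a placeholder – the name, the cron expression, and both ARNs – and the role must be assumable by EventBridge Scheduler with permission to invoke the function:

aws scheduler create-schedule \
    --name techconf-bot-weekly \
    --schedule-expression "cron(0 8 ? * MON *)" \
    --flexible-time-window Mode=OFF \
    --target '{"Arn":"arn:aws:lambda:eu-central-1:111111111111:function:techconf-bot","RoleArn":"arn:aws:iam::111111111111:role/scheduler-invoke-role"}'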

Final Thoughts

Did I mention the best part? This is all free. The services I used are all covered by the AWS Free Tier (as of this writing).

Feel free to create similar bots or improve my code or just follow my bot at: https://mastodon.social/@techconf

Restricting AWS Lambda Function URLs to CloudFront

AWS Lambda Function URLs are a great feature that fits seamlessly into AWS’s serverless vision. Combined with S3 static hosting and CloudFront, it is an ideal platform for high-performance website hosting without the hassle of managing a complex underlying infrastructure.

The basics: S3 static website hosting

Hosting your static website has never been easier. With Amazon S3 static hosting, you can serve your static pages by simply uploading them to an S3 bucket and enabling public access (be sure to name your bucket after your domain name). You can find plenty of articles on the web that explain how to set up S3 static hosting, which is why I am not going into further detail here.

But there are limitations: S3 static hosting doesn’t support HTTPS, the de facto minimum for website hosting. To use HTTPS, you need to set up Amazon CloudFront. This comes with a lot of extra features like GeoIP restrictions, caching, and a free SSL certificate. Not to mention, you can finally disable your S3 public access (which could be a security risk) and give limited access to CloudFront only (with a bucket policy).

Pro tip: Give CloudFront ListBucket permission in your S3 bucket policy, otherwise the client will not receive proper HTTP status codes – S3 returns a 403 instead of a 404 when trying to access non-existent content:

{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::roadtoaws.com",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::111111111111:distribution/AAAAAAAAAAAAA"
                }
            }
        },
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::roadtoaws.com/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::111111111111:distribution/AAAAAAAAAAAAA"
                }
            }
        }
    ]
}

Because of the caching involved with CloudFront, this setup is not ideal for development. You either have to test your code locally or with HTTPS disabled. This is the main reason why I would still like to see HTTPS support in S3 in the future. 🔮

Make it dynamic

Purely static websites are a thing of the past; you will most likely need some kind of dynamic content. While there are plenty of third-party services – e-mail sending, comments, and so on – that you could embed in your static code to make it dynamic, you’ll most likely have to write your own code at some point. This is where Lambda Function URLs come in handy. With a simple Lambda function, you can execute code or use other AWS resources, invoked by a simple HTTP request from your browser. But how do you restrict it to a specific IP, domain, or CloudFront? 🤔

AWS recommends authenticating through IAM, and while this is a really secure way, it makes development challenging. The first thing you see is CORS, where you can set your origin to a domain. Unfortunately, this didn’t work for me the way I wanted it to: CORS doesn’t restrict your Lambda from being called from any IP. You can also set an X-Custom header here, but that doesn’t really limit external access either.

Then you look for matching IAM permissions that you can attach to Lambda functions. Among the available policies you can find InvokeFunctionUrl, where you can add an IP address to limit the invocation to a specific IP. This sounds great! You create a policy and attach it to your Lambda role. Unfortunately, this does not restrict your Lambda access either.

So what was my solution? 🙋🙋🙋

1. Restrict in code

The first obvious solution is to check the source IP in your Lambda function. Here is a sample in Node.js (you can find similar code for other languages online):

export const handler = async (event) => {
    // Function URLs use the HTTP API 2.0 payload format, so the caller's
    // address is found under requestContext.http
    const ipAddress = event.requestContext.http.sourceIp;

    // Allow only the given IP; reject everyone else
    if (ipAddress !== '52.84.106.111') {
        return {
            statusCode: 403,
            body: JSON.stringify('Access denied'),
        };
    }

    return {
        statusCode: 200,
        body: JSON.stringify('Hello World!'),
    };
};

While this obviously works, you’re adding extra code to a Lambda function whose primary role is to do something else. Not to mention that this increases the runtime and the resources used by Lambda. Most importantly, how can you be sure that the IP you get in the sourceIp variable is really the IP the client comes from?

My biggest concern with this solution was that I wanted to restrict my functions not just to one specific IP but to my whole CloudFront distribution – so that I can be sure they are called from one of my static pages. With this method, it would be a hassle to maintain an up-to-date list of all CloudFront servers. 📝📝

2. reCAPTCHA

Yes, you heard it right: Google reCAPTCHA. This may sound strange at first, but this is the solution I implemented in my work, and it addresses the challenges above.

Embedding the reCAPTCHA code in your static web pages is a good idea. In fact, Google recommends including the code on all of your pages – not just the ones that need it, such as form validations – because that way the algorithm can detect fraudulent use more effectively. Within the Lambda function, I can then validate whether or not the user really invoked my Lambda Function URL from my static web page. Here is the code I use to verify the reCAPTCHA request:

// g-recaptcha-response is passed along by the static page that calls the URL
const gRecaptchaResponse = event.queryStringParameters['g-recaptcha-response'];

// getRequest() is a small helper (defined elsewhere in the function) that
// performs an HTTP GET and returns the parsed JSON response
const verificationUrl = 'https://www.google.com/recaptcha/api/siteverify?secret=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&response=' + gRecaptchaResponse;
const recaptchaResult = await getRequest(verificationUrl);

// Reject the request if verification failed or the score is too low
if (recaptchaResult.success === false || recaptchaResult.score < 0.5) {
    return error; // a 403 response object, as in the previous example
}

In conclusion

S3 static website hosting is the easiest way to start with your serverless journey. While there are obstacles ahead you can always find a serverless solution. 🏆

Operational best practices for AWS Well-Architected Framework

In a traditional hosting environment, you have to guess your infrastructure needs, usually can’t afford to test at scale, can’t justify experiments, sometimes fear change, and can easily end up with an architecture that is frozen in time. By migrating to the cloud you can overcome these issues, but how do you know that the practices you follow actually leverage these advantages?
The AWS Well-Architected Framework provides design principles that ensure that your cloud environment is built efficiently and securely and is high-performing and resilient. 👌

The AWS Well-Architected Framework consists of six pillars:

  • ⚙ Operational excellence
  • 🔒 Security
  • ⛓ Reliability
  • 🚀 Performance efficiency
  • 💸 Cost optimization
  • 🌳 Sustainability

AWS not only provides training and documentation on the AWS Well-Architected Framework but also provides the tools you can use to monitor your cloud infrastructure.

In this blog post, I will present a method on how to test your cloud environment against the Security and Reliability pillars of the AWS Well-Architected Framework.

🔒 The Security pillar focuses on the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
⛓ The Reliability pillar focuses on the ability to recover from failures and meet demand in foundations, workload architecture, change, and failure management.

Setup

AWS Systems Manager is the go-to place to gain operational insights into AWS. Here, on the Quick Setup page, we can select Conformance Packs. But let’s not run too far ahead, since we need to prepare our environment first; without that, the tests will fail with a not-so-useful error message. 🤷‍♂️

To prepare our environment we have to enable Config recording. We can enable this by going to AWS Config and selecting 1-click setup. This will record all resources (excluding global resources), set an AWS Config role, and create an S3 bucket. If you would like to fine-tune which resources to record, select or create a specific role, or choose a specific S3 bucket, select Get started instead. Once recording is enabled we can go back to Systems Manager.

In the Conformance Packs configuration screen, we can select whether we would like to check for operational best practices for the AWS Well-Architected Framework Reliability pillar, the Security pillar, or both. We can schedule when to deploy the configuration and select our region. Once the pack is deployed, the checks usually take a couple of minutes to run. ⏲
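
The same pack can also be deployed from the CLI with put-conformance-pack. A hedged example – the pack name is arbitrary, and the template file is an assumption (the sample templates are published in the awslabs/aws-config-rules repository on GitHub):

aws configservice put-conformance-pack \
    --conformance-pack-name WA-Security-Pillar \
    --template-body file://Operational-Best-Practices-for-AWS-Well-Architected-Security-Pillar.yaml
# for larger templates, use --template-s3-uri instead of --template-body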

Results

AWS Config will show the results grouped by AWS services.

Clicking on an issue shows a detailed explanation.

Pricing

Pricing is based on the number of conformance pack evaluations. Since AWS currently doesn’t show how many evaluations are in each pillar, it’s hard to estimate the cost without running it. It would be nice if AWS had fixed pricing for the Operational Best Practices conformance packs. AWS Config has a pricing example on its website that shows a total Config bill.

Summary

The AWS Well-Architected Framework is a great and unique feature that differentiates AWS from other cloud providers, and I don’t see why it’s not yet included in the AWS Free Tier. Having a healthy cloud environment is good both for AWS and for the customer. 👍

Installing AWS CLI on Apple silicon

You’ve just received your shiny new Mac with an Apple silicon processor – like the M1 – and would like to install the AWS CLI. As usual, you download the latest GUI installer from AWS, but it prompts for Rosetta. Does this mean that the latest version only supports Intel processors? 🤔

Apple made the transition from Intel to Apple silicon relatively easy for end users. Rosetta 2 does a wonderful job of translating applications compiled exclusively for x86-64 processors so they can run on Apple silicon. Since Apple silicon has been out for a while, many developers provide Apple silicon compiled binaries. In fact, there are few major companies that don’t provide an Apple silicon version of their app. This is why some people – including myself – never install Rosetta. This way, I can guarantee that all my apps are optimized for the new processor.

The AWS documentation says that there are three ways to install the CLI on the Mac:

  • GUI Installer
  • Command line installer – All users
  • Command line – Current user

The sad news is that all of these methods use the same macOS pkg file. The installer in this file is not yet optimized for Apple silicon, but the included binaries are. This means that you have to install Rosetta just to install an Apple silicon app. Strange, indeed. 🙃 Thankfully, there’s another solution that the official documentation doesn’t mention: Brew.

Homebrew is the missing package manager for macOS. You probably already use it if you would like to install apps like wget or mc. Installation is simple and straightforward, just run the following command in your terminal.

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Next run these two commands to add Brew to your PATH:

echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

The only downside I see with Brew is that it sends analytics data to Google. It does warn you about this but doesn’t tell you how to turn it off. While the Homebrew maintainers say these analytics help them decide on future features and prioritize current work – and recommend keeping it on – I am still not a fan of personal data collection, even if it’s anonymous. To turn it off, simply run the following command:

brew analytics off

Now that Brew is installed you can easily install the AWS CLI by executing the following command:

brew install awscli

Voilà, the AWS CLI is now installed without Rosetta. 🤘
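
A quick sanity check, assuming Homebrew’s default Apple silicon prefix:

which aws       # expect /opt/homebrew/bin/aws
aws --version   # runs natively, no Rosetta prompt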

⚠ I should note that this workaround was needed at the time of writing this article and AWS will probably fix the installer, but until then just use Brew.

Running WordPress on AWS – the cheap and easy way

You’ve probably heard a lot of good things about AWS and would like to start (or move) your WordPress site there, but you find it difficult to choose the right service and pricing model – AWS has over 200 services with different pricing models. You’re at the right place; this is the article for you! Let’s get started! 🏁

Advantages

What are the advantages of moving to AWS?

First, it has a global footprint! A large hosting provider has only about 2-3 locations where you can host your website (small ones have only one). And not only that, you have to decide during signup, so you are stuck with that location for the rest of your life. On the other hand, AWS has dozens of locations (called regions) to choose from, and you aren’t stuck with any of them. You can have a WordPress site in Tokyo and another one in Singapore. This is good for a number of reasons: you get closer to your clients (so they can access your site faster), plus you can comply with local regulations.

The other advantage compared to hosting providers is security. AWS is built with security in mind, and you can expect that if your WordPress site is set up correctly, it will run reliably. Other users will not impact your website’s performance.

AWS can adapt to your current needs and you can easily add or remove resources when needed. You only pay for the resources you consume.

Lastly, you will get a free static IP for each website compared to shared hosting where you share the same IP with others. This is excellent for eCommerce websites.

Intro

The service I will guide you through is called Amazon Lightsail. It has a simplified user interface compared to other AWS services and fixed monthly pricing. The focus of this article is on how to get a reliable website up and running with the cheapest option available. We will use Let’s Encrypt’s free SSL certificate instead of the Lightsail CDN, which is currently free for the first year (up to 50 GB) but costs $2.50 USD/month afterwards. Not to mention that if your visitor numbers increase, you may be charged $35 USD/month. This is why we will only use services that have a fixed fee even if your visitor numbers increase. 💰

Setting up an AWS account

If you don’t already have an AWS account, you can create one by visiting the AWS website and clicking on the Create an AWS Account button in the top right corner. You will be asked to provide your email address, password, username, and personal information, including your phone number and credit card details. Your card will only be charged for the services you use. It is a good idea to secure your account right after creation: read my First things to set on a newly created AWS account post on how to enable Multi-Factor Authentication on your account.

Amazon Lightsail

Log in to your AWS account and let’s jump into Amazon Lightsail. In the top bar, type lightsail and select the Lightsail service, or click on this link to start it directly. If this is your first time, you will be asked to select your language. Select it, and click on the Let’s get started button. You will immediately notice that Lightsail has a much friendlier interface. 🤝

Setting up WordPress

Let’s start by creating an Instance. Lightsail will automatically take you to the instance setup with a Welcome message to start your instance. Later you can create it under the Instances tab, with the Create instance button.

Creating an instance

On the Select your instance location screen, choose the location based on the parameters I outlined before. The good thing about Amazon Lightsail is that the prices are the same in all regions (this is not true for other AWS services). You can freely choose a region and the prices won’t change. I will select the Frankfurt region due to GDPR.

Next, we select the instance image. Since this article is about WordPress, we will select WordPress.

Now here comes the tricky part that you may have missed if you haven’t read this article. 🤠
You should always select Enable Automatic Snapshots. AWS doesn’t guarantee that your instance won’t fail, and if it fails, your data might be lost. This is why we enable automatic snapshots, so we can recover our data easily in case of an emergency. 🦺

Select a pricing option. Choose the option that meets your needs and fits your budget.

Identify your instance with a friendly name. This is just for display purposes; it has no effect on the instance, but you cannot change it later.

Click on Create instance. Please wait a couple of minutes while your instance is being created in the background. After the instance has been created it will show the state “Running” on your Instances tab. Your WordPress site is now up and running, but there are a couple of important things we should set before going on a coffee break. ☕

Attaching a static IP

Click on the instance name and select the Networking tab. Select Create static IP. Name your static IP and click Create. You may ask why this is important when there is already an IP address associated with your instance. The problem is that this IP comes from a dynamic pool: when you stop and start your instance, your IP address will change, and we don’t want that. By attaching a free static IP, our IP address stays the same all the time.

Set up DNS

Now it’s time to connect your domain with this IP address. If you don’t have a domain yet and you’re Canadian, I suggest registering a .ca domain because country code-specific endings are always favorable; but if you would like to stick with AWS, you can use Route 53 for that.

Point your domain name to the static IP we created earlier. This can be done by updating your A record with this IP.
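
For illustration, this is roughly what the records look like in a standard zone file – the domain and the documentation-range IP are placeholders:

example.com.      300  IN  A      203.0.113.10
www.example.com.  300  IN  CNAME  example.com.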

If everything is set up correctly by entering your domain name in your browser a fresh new WordPress site will appear. 🎉🎉🎉

Setting up SSL

Having an SSL certificate is mandatory nowadays, and there are several ways to achieve this. I will show you how to set up Let’s Encrypt, which is a free SSL certificate authority. For this, I have created a simple script that does the heavy lifting for you. You can find this script on GitHub. To set up SSL, first log into your instance via SSH by selecting the Connect tab on your instance and clicking on the Connect using SSH button. You will be directed to a terminal window, but don’t worry, we won’t spend much time here. 😀

A new popup will appear with a terminal. We will paste a simple script into the terminal (or you can type it manually, whichever you prefer).

To paste the script, find the clipboard icon in the bottom right corner of the window, click on it, and paste the following script into it; then click on the terminal and press CONTROL + SHIFT + V (or COMMAND + V if you are on a Mac).

wget -O - https://raw.githubusercontent.com/suhajda3/lightsail-ssl/main/lightsail-ssl.sh | sudo bash

The script will ask for your domain name and email address. If everything is set up correctly the script will update your system, set up Let’s Encrypt, and will auto-renew it every 90 days, plus it will display your WordPress credentials. Everything we need. 😎

You can run this script any time you would like to update your system.

📝 Make note of your username and password because we will need this later.

Type exit to log out of the terminal and close the window.

Securing your instance (optional)

Since we don’t need terminal access to the instance all the time, it is a good idea to disable SSH access. To do this, head over to the Networking tab and click on the trash icon next to the SSH row. Make sure that HTTP and HTTPS are still there, because without them we couldn’t access our site. When you would like to run the script again or need terminal access, simply add the SSH rule back. 🔒

Finalizing your WordPress install

Our WordPress instance is now set up correctly. Log in to your WordPress site by adding /wp-login.php to the end of your URL. Here you can log into your site with the credentials that the script displayed before.

Before you leave, we should change one last thing: add your email address in case you lose your password. In the top right corner, select Edit Profile, change your email address, then click Update Profile at the bottom of the page. Next, click on Settings, General on the left side and change the Administration Email Address as well.

Your WordPress site is now up and running. Congratulations! 😌 🥳

Enable logging in API Gateway

Now that our Amazon API Gateway is up and running, it is crucial for us to detect any errors or misuse. Our Lambda functions already have logging enabled by default, and we can see possible errors and usage metrics under each function’s Monitor tab. Our API Gateway, on the other hand, doesn’t have logging enabled by default. In this episode, we will set up logging for that as well.

CloudWatch settings

Different logging settings can be applied for each API stage. That is why we find the CloudWatch settings under Stages -> [stage name] -> Logs/Tracing.

For CloudWatch logs we can select from two logging levels: INFO to generate execution logs for all requests or ERROR to generate execution logs only for requests that result in an error.

We have the option to log full requests/responses data by selecting the appropriate checkbox.

Here we can also enable detailed CloudWatch metrics.

Let’s say we have never enabled API logging before. In this case, when trying to save our changes we will get the following error:

CloudWatch Logs role ARN must be set in account settings to enable logging

CloudWatch permissions

The above error appeared because we have not yet set up the CloudWatch log role ARN under Settings.

❗ Keep in mind that API settings are global. They apply to all of our gateways. Changing the CloudWatch log role ARN in one API Gateway will change it on all of our gateways provided that we are using the same region!

Let’s try adding the ARN of our previously created role, simple-api-role. You can get the ARN from the IAM console -> Roles, then selecting simple-api-role.

Upon adding our ARN we get another error: 🤯

The role ARN does not have required permissions configured. Please grant trust permission for API Gateway and add the required role policy.

Our role is not yet configured to write to CloudWatch. Let’s go back to IAM and update our simple-api-role with the proper permissions.

First, we need to attach the AmazonAPIGatewayPushToCloudWatchLogs policy to our role. We have added policies to roles before; if you are stuck, go back to the Adding a new Lambda function to an API Gateway post, where I described how to attach a new policy to an existing role. But we are not done yet… ⏱

On the Trust relationships tab, click Edit trust relationship and add apigateway.amazonaws.com. If you have only used Lambda with this role, this example policy document will work for you:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
            "lambda.amazonaws.com",
            "apigateway.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Now that the permissions are properly configured we can go back to the API Gateway and add the role without any errors. 🤠
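
For reference, the same account-level setting can also be applied with the CLI – the account ID in the role ARN is a placeholder:

aws apigateway update-account \
    --patch-operations op=replace,path=/cloudwatchRoleArn,value=arn:aws:iam::111111111111:role/simple-api-role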

Finish

We have set up the CloudWatch log role ARN now it’s time to enable logging in our API Gateway.

When we enable logging, in the /aws/apigateway/welcome log group we will see a new log entry: Cloudwatch logs enabled for API Gateway. This means we have done a great job! 🥳 Unfortunately, the log message doesn’t say for which gateway, but based on the timestamp we can double-check whether it is ours.

The Amazon API Gateway will generate a new log group based on the following format: API-Gateway-Execution-Logs_apiId/stageName. Here we can find the log entries for our API Gateway.

We are almost finished with our API Gateway series, but we have saved the most important task for last: documentation. 📄

Using Lambda environment variables

Declaring variables in the source code is ideal when we only use them inside that specific source file. But let’s say we have multiple files in which we would like to use the same variable, or perhaps we would like to encrypt the variable’s value. This is when Lambda environment variables come to the rescue.

Lambda environment variables are key-value pairs of strings that are stored in a function’s version-specific configuration. The latter is important if we use versioning in Lambda. For now, we will focus on the key-value part; we will talk about Lambda versions at a later time.

Defining environment variables

We can specify environment variables under the Configuration tab, Environment variables section.

Clicking on Edit we can set the key and value of the environment variable. For this tutorial let’s create two variables: one for storing a username and another one for storing a password.

After clicking Save, our environment variables are created and available through the Lambda runtime. We can access them from the process environment, like this:

const username = process.env.username;
const password = process.env.password;
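
The same variables can also be defined from the CLI – a quick sketch, assuming a function named simple-api-auth and throwaway values:

aws lambda update-function-configuration \
    --function-name simple-api-auth \
    --environment "Variables={username=misi,password=secret}"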

Creating a key for encryption

Our password is a piece of very sensitive information, and we would like to modify our code so that only our Lambda code can decrypt it. Environment variables support encryption with AWS Key Management Service (KMS).

Let’s go to the KMS Console and create a new key. Under Customer managed keys we click Create key.

We then configure the key type to Symmetric and name our key “simple-api-key” under Alias. You can change the alias at any time. For educational purposes let’s not define key administrative permissions and key usage permissions for now.

Encrypting Lambda environment variables with KMS

Now, back in Lambda, let’s check the Enable helpers for encryption in transit option. A new Encrypt button appears next to each variable. Clicking Encrypt, we can now select our newly created key.

Decrypting our variable

If we look at our variable or print out its value from Lambda, we will see something like this: AQICAHhc385PwJyf/tV5ZOhskZFcr5b6NMe/u3YFxJEWOhlnxQG776g/ozncvTV1p5KoSQucAAAAZzBlBgkqhkiG9w0BBwagWDBWAgEAMFEGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMGdpuISr9cRZoNj8TAgEQgCTHd1A1f6zmXa7cCbt8Q9UJqSetCvZ6m/I8VZuLC54k/0934ZE=

In order to decrypt it we need two things:

  1. decrypt the variable with the KMS Decrypt operation 🔓
  2. grant our function permission to call the KMS Decrypt operation 🔑

First let’s modify our code to decrypt our variable. Here is a sample code:

const AWS = require('aws-sdk');

const plainUsername = process.env.username;
const encryptedPassword = process.env.password;
// Cached outside the handler so warm invocations can skip the KMS call
let decryptedPassword;

// ... inside the async handler:
if (!decryptedPassword) {
    const kms = new AWS.KMS();
    try {
        const req = {
            CiphertextBlob: Buffer.from(encryptedPassword, 'base64'),
            // Lambda uses the function name as encryption context when
            // encrypting environment variables, so decryption needs it too
            EncryptionContext: {
                LambdaFunctionName: process.env.AWS_LAMBDA_FUNCTION_NAME
            },
        };
        const data = await kms.decrypt(req).promise();
        decryptedPassword = data.Plaintext.toString('ascii');
    } catch (err) {
        console.log('Decrypt error:', err);
        throw err;
    }
}

When executing this code we will get an error in CloudWatch:

“errorType”:”AccessDeniedException”,”errorMessage”:”The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.”

In order to decrypt the environment variable, our function needs access to the key that we used to encrypt it. Let’s go back once more to the KMS Console and modify the key users: we add our Lambda execution role, in our case simple-api-role, to the key users.

And Done! We have successfully created Lambda environment variables that we can now use in multiple source files and secured our password with KMS! 🌩⚡

Controlling API Gateway access with Cognito

During the API Gateway series, we already created an API Gateway and a new Lambda function. We named that function simple-api-auth for a reason. Can you guess why? 🤔

Cognito User Pools

Amazon Cognito is a simple and secure user sign-up, sign-in, and access control service. It can manage User Pools and Identity Pools. User pools are user directories that provide sign-up and sign-in options for your app users. Identity pools provide AWS credentials to grant your users access to other AWS services.

For our API Gateway, we will create a Cognito User Pool that will handle all of our authorization tasks, including managing usernames, passwords, and access tokens.

Let’s start with Cognito by selecting Manage User Pools, where we Create a user pool. We name our pool simple-api-AUTH and review the Step through settings as we customize our pool. ❗Remember that we cannot change these attributes after we have created the pool; policies and other pool settings can be changed later, but attributes cannot. When we are at the App client settings, we create a new app client for our API Gateway.

Here we set up our app client. For simplicity, we will uncheck the Generate client secret option and enable ALLOW_ADMIN_USER_PASSWORD_AUTH, which our Lambda function will need for access.

Our User Pool is now ready. It’s that easy. 😀

Adding a user to a Cognito User Pool

We have several options to create users in our user pool. The default settings allow users to sign themselves up. We can create a simple UI or enable other identity providers like Facebook or “Sign in with Apple”. For simplicity, we will create the user manually under Users and groups.

After we have created the user, they will receive an email with the following information:

Your username is misi and temporary password is 00eEhtI;.

It looks like everything is ready in Cognito, but if we look closely, we see that the user is not yet activated. The account status is FORCE_CHANGE_PASSWORD. 😡

We cannot change this in the Cognito UI so we will do this in Lambda instead.

Connecting our API Gateway to Cognito

We now head back to our API Gateway and select Authorizers. Here we Create New Authorizer.

We select Cognito as the type and choose the Cognito User Pool we created earlier. You can name your token source whatever you like, but to follow standards we name it Authorization.

Securing an API method with Cognito

Let’s start securing our methods with Cognito authorization. I will select the GET method inside the hello resource that we created earlier. We set up API Keys for this method before, so I will remove the API Key required option and select Cognito for Authorization.

If we check out our method we now see that Cognito is the Authorizer.

Preparing our auth function for authentication

When we added a new Lambda function to our API Gateway, we created an auth method for our gateway. We will use this for authentication. It’s a good idea to rely on the features that Amazon API Gateway already has, including request validation. The API Gateway can validate the query string, the headers, and the body. The latter we will discuss in a later post, because it requires creating a model. Setting up query string parameters is much simpler.

Let’s supply username and password as URL Query String Parameters and mark them Required. Under the Request Validator select Validate query string parameters and headers.

The AWS API Gateway will now check for these parameters and if they don’t exist the gateway will throw an error to the user.

Don’t forget to Deploy the API.

Setting up the necessary permission for Lambda

Our Lambda function needs to access our Cognito user pool. Yes, you guessed right we are going to IAM. ✨

There is no default policy for the permissions we would like to issue so we will create a new policy for it. We need AdminInitiateAuth and AdminSetUserPassword permissions for our Lambda function to manage our Cognito user pool.

Under Policies, we Create policy, and under Services we select Cognito User Pools. Under Actions, we select the two permissions, and under Resources we add the ARN of our Cognito User Pool.

We then create this policy and attach it to our simple-api-role, as we learned in the previous post.

Confirming the user

Let’s go back to Lambda and get rid of that pesky “FORCE_CHANGE_PASSWORD” status. For this, we will write a simple Lambda function that will change the password of our user.

This is the code I used to verify the user:

const aws = require('aws-sdk');
const cognito = new aws.CognitoIdentityServiceProvider();

exports.handler = async (event) => {
    const params = {
        Password: 'password',   // the new, permanent password
        UserPoolId: 'Pool Id',  // found under the pool's General settings
        Username: 'username',
        Permanent: true
    };

    await cognito.adminSetUserPassword(params).promise();
};

Run the code and if we set up everything correctly Cognito will show that the account status is now CONFIRMED.

Final touches

We are almost finished! We just have to write a small code that will call Cognito for authorization. Luckily we already have a sample Lambda function that we can modify: simple-api-auth

Replace the code we had earlier with this sample code:

const aws = require('aws-sdk');
const cognito = new aws.CognitoIdentityServiceProvider();

exports.handler = async (event) => {
    const params = {
        AuthFlow: 'ADMIN_NO_SRP_AUTH',
        ClientId: 'App client id', // from the app client settings
        UserPoolId: 'Pool Id',     // from the pool's General settings
        AuthParameters: {
            USERNAME: event.queryStringParameters.username,
            PASSWORD: event.queryStringParameters.password
        }
    };
    
    var authResponse = await cognito.adminInitiateAuth(params).promise();
    
    const response = {
        statusCode: 200,
        body: JSON.stringify(authResponse),
    };
    return response;
};

Deploy and we are done!

Testing our API Gateway authentication

Let’s go to Postman and see if everything is working as expected.

If we call our /hello method we will receive the following error:

“message”: “Unauthorized”

Great! We need an IdToken to access this method. Let’s call our auth method to get the token. API Gateway will check if we have the username and password params. If not, we will receive an error.

We received our token. 🥳 Now if we go back to our /hello method and set the Authorization header we will have access to our function. Be sure to use the IdToken for Authorization.
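
If you prefer the command line to Postman, the flow looks roughly like this with curl – the API ID, region, stage, and credentials are all placeholders:

# Get the tokens from the auth method
curl "https://abc123.execute-api.eu-central-1.amazonaws.com/v1/auth?username=misi&password=secret"

# Call the protected method with the IdToken from the response
curl -H "Authorization: <IdToken>" \
    "https://abc123.execute-api.eu-central-1.amazonaws.com/v1/hello"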

And voila! Our API Gateway is now using Cognito for authentication.
