Monday, September 26, 2022

Azure Cloud Detection Lab Project

How to build a business case for cloud migration

If your organization is operating on a more traditional on-premise model, then it’s probably time to shake things up in a big way.

As with any IT project, if you’re planning a move to the cloud you’ll need to prepare a tight business case to secure the support, resource, and budget from key stakeholders.

Cloud is the new normal, and most businesses today have either already moved their operations into the cloud or are in the process of migrating. Newer businesses tend to operate in the cloud from day one because of the features, computing power, and scalability it offers.

Why build a business case for AWS migration?

Business cases generally come about because your organization has a specific need that it can’t meet with its current tools or resources. The goal could be anything from increased customer engagement to better sales, more detailed insight into your data, or simpler processes that make your company leaner.

“Value drivers like business agility, cost and risk management influence the business case for cloud migration,” explains Avon Puri, Chief Information Officer at Rubrik.

“For any company embarking on a cloud journey, moving the disaster recovery site to cloud could be the most compelling first use case. This not only reduces the complexity and minimizes the risk of data loss, but also brings the cost down significantly by eliminating the need for operating a secondary data center. Scalability, reliability, automation, on-demand usage are additional cloud drivers.

“Migration to the cloud enables companies to instantly provision self-service dev and test environments to developers and greatly enhance DevOps.”

No matter what the reason for migrating to the cloud may be, you’ll probably need to persuade stakeholders from across your business to get on board with your plans before taking any major steps towards it. You’re essentially trying to sell cloud migration as a solution to your organization’s pain points, to everyone from C-suite right down to your junior-level colleagues.

When your completed business case crosses a decision-maker’s desk, it needs to cover everything from the provider/s and products to timelines, costs, who you’ll need on board, the problem/s it solves, and how it all fits in with your organization’s goals and vision.


Common cloud migration myths

There are a lot of mistruths out there about cloud migration, and you’re bound to come up against at least a few when you’re making your case. Here are some prevalent myths to clear up before you start:

“Moving to the cloud is always cheaper.” 
This one depends largely on what kinds of costs you’re comparing, but generally speaking, operating in the cloud is cheaper in the long run. In some cases, operating costs can be higher, but that’s down to poor cost control, inexperienced employees, a weak discovery stage, duplicate processes, or an increase in staffing spend.

The good news is that you can minimize these issues and generate ROI pretty early on in the game if you do the right kind of research and invest in the right AWS migration team from day one.

“All your assets should live in the cloud.” 
Once again, that depends on your organization’s particular needs and the kind of legacy system you’re working with. Sometimes, the best answer doesn’t lie solely in one cloud provider—in many cases, a hybrid solution works best.

Before settling on this, however, it’s best to wait until you’ve carried out your first round of qualitative analysis. Find out everything you can about your traffic patterns and dependencies, and move from there.

“Server costs are all that matter.”
While the cost of running your servers on-premise is a major factor in any cloud migration, it’s far from the be-all and end-all of your financial considerations. The bottom line is this: you shouldn’t take a server-only or VM-only approach to cost-cutting.

For starters, if you’re operating an on-premise model, your CFO is going to be very interested in how the cloud will cut the costs associated with hardware refresh cycles.

Find out how much it costs to run your data centers in terms of real estate, maintenance, and manpower, and you’ll find that a good chunk of your budget goes there. Moving into the cloud, of course, minimizes these costs because you’re using AWS’s resources instead.

Speaking of data centers, you’ll want to zero in on exactly how much your downtime is costing your business, and compare that to the downtime and cost of running things in the cloud.

“Moving to the cloud is quick and easy.”
Anyone outside of the IT bubble can be forgiven for thinking that cloud migration is as easy as switching internet providers. While moving to the cloud does make life easier, it takes a lot of good old-fashioned hard work to get there first.

Migrating to the cloud means major changes across every part of your organization; when putting your business case together, remember to include realistic timelines based on:

Bandwidth

When you’re moving things over to the cloud, the bandwidth it takes to transfer that data from your data center to your provider will directly affect the overall time it takes to complete your migration. Factor this into your timeline to avoid disappointment later down the line.

Testing

Nothing slows your migration timeline down like a poorly planned testing stage. When it comes to making sure everything’s up to scratch both internally and from a customer-facing perspective, you need to factor in more than enough time for rigorous testing and any fixes that need to be applied.

The actual migration

The longer the migration process, the higher costs can creep and eat into your budget. Minimize the chance of delays by allocating resources and engaging partners as early as possible, and factor the time it takes to get these things in place into your timeline.

Assess any technical or cultural hurdles that might need to be overcome, and figure out how you can overcome them.

How to build a business case for AWS
Migration means moving a considerable portion of your organization’s existing assets to the cloud.

A solid business case can:

Help make the cloud adoption process go more smoothly
Discover more ways to attract new business and improve the experience of your existing customers
Identify various key stages and the associated adoption/migration costs
Help you understand your existing workloads and create the best plan going forward
Win the support of key stakeholders
Even at a glance, a standard business case should cover:

The projected costs of moving your operation to the cloud
The current cost of running your existing systems and infrastructure
How much moving to AWS would save your business
The benefits of AWS
The cost of building the infrastructure for new workloads
But how do you get things up and running? Let’s go through things step by step.

The executive summary
This is a quick, clean overview of and introduction to your master plan. This should cover, in short, the challenges your business faces and how migrating to the cloud is going to address those issues.

The problem statement
This part homes in on exactly why you’re presenting this course of action to the business and what your goals for the implementation are. The best way to get this bit done and dusted is by using the SPIN approach:

Situation: what’s your company’s current circumstance?
Problem: why isn’t it working out?
Implication: how does that problem impact the wider business?
Need: what do you need to resolve the issue and pave the way towards a better, brighter future?
Organize your statement with these questions in mind, and you’ll breeze through it in no time.

Outline your main objectives
This part of your proposal needs to cover what moving to the cloud will achieve. Basically, you’ll want to paint a clear, vivid picture of what your organization should look like once your solution has been implemented, and everyone has had time to adjust to the new technology.

Proposal
You’ve explained the reason for this massive undertaking, but now it’s time to talk about the inner workings of it all. This section should answer questions like ‘What exactly is AWS?’ and ‘How will these products help achieve our long-term goals in line with the organization’s core values?’

Alternatives
Those holding the purse strings will want to know that every possible alternative has been explored, so it’s worth spending some time going through the other options that you ruled out along the way.

Limitations and risk assessment
You’ve covered the ‘good’, and there is a lot of it, but now it’s time to spill the beans on the bad and the ugly. Glossing over or completely cutting out the risks will make your proposal appear biased, and might put off the very people you need buy-in from, so don’t be afraid to outline the risks that come with implementing your solution.

What matters is that you’ve got a way to navigate those speedbumps and remedy any issues if and when they happen. This will also help you create a more realistic timeline, and that’ll help you make sure you keep a tight hold on your budget.

What kind of risks could you face? Well, here’s a quick example. When it comes to digital transformation, one of the most common risks you’ll face is poor user adoption rates, effectively leaving you with a shiny new solution gathering dust on the shelf.

This particular problem can be avoided by allocating enough resources to train your employees ahead of time, and by nominating power users who can act as their team’s point of reference when it comes to using the software effectively.

Having buy-in and visible support from C-suite also helps to drive user adoption from the top down.

Naturally, there are other implementation challenges out there that are specific to your company and the industry it operates in. Just remember this: no matter which pitfalls you need to outline in your business case, always follow up with how and why the advantages far outweigh those risks.

Cost analysis
The real crux of any true sales pitch or business case comes down to those dollars and cents. This is the time to showcase exactly how your cloud solution is going to save your organization money—how it affects ROI.

All you can do at this stage is offer thoroughly-researched predictions, backing them up with expected costs and the financial gains you stand to make in the long run. Here are a few important factors to address in this section:

Current cost of running your existing system
The long-term cost of sticking with an on-premise model
Cost of opportunities lost as a result of using a more traditional system
Comparison of projected costs on your current system against the cost of the proposed solution (remember to calculate potential ROI over the next three to five years; see the sketch after this list)
Total cost of ownership (i.e. including training, software licenses, implementation, cloud storage, development if customization is required)
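
To make that ROI comparison concrete, here is a minimal Python sketch of the kind of multi-year calculation you might include; every figure below is a hypothetical placeholder, not a benchmark, so substitute your own discovery-stage numbers.

    # Hypothetical five-year TCO comparison: on-premise vs cloud.
    # All cost figures are illustrative placeholders.
    YEARS = 5

    # Annual on-premise costs: hardware refresh, real estate,
    # power/maintenance, and staffing.
    on_prem_annual = 120_000 + 40_000 + 25_000 + 180_000

    # Cloud costs: one-off migration spend plus annual usage and training.
    migration_one_off = 150_000
    cloud_annual = 200_000

    on_prem_tco = on_prem_annual * YEARS
    cloud_tco = migration_one_off + cloud_annual * YEARS

    savings = on_prem_tco - cloud_tco
    roi = savings / cloud_tco  # rough ROI against total cloud spend

    print(f"5-year on-premise TCO: ${on_prem_tco:,}")
    print(f"5-year cloud TCO:      ${cloud_tco:,}")
    print(f"Projected savings:     ${savings:,} (ROI {roi:.0%})")
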
Implementation plan
To make your business case airtight, create a timeline complete with deadlines for each stage of the process, roll-out dates, and who’ll need to be involved along the way. This way, you’ll show that your proposal is actually achievable!

If you don’t have the cloud talent you need in-house, you’ll need to get in touch with a partner to map it all out and get everything ship-shape before launch. You won’t know the specifics until you bring that partner on board, so for the purposes of your business case, you’ll just need a snapshot of the journey from day one to implementation.

Next up: your KPIs. How will you measure your performance? What does success look like for your project? Whether you’re looking to offer customers a more streamlined experience, optimize your supply chain, improve hiring processes, or completely revolutionize the way your business works, you need to figure out the best way to check your progress and know when it’s time to pop the champagne bottles.

Project ownership
Last but not least, project ownership. This is where you outline who does what and when—who signs off on things, and who’s responsible for the overall success of the project?

Designate the right people for the job from across different departments, and provide a rough idea of where they’ll need to get involved in your timeline.

Cloud Security Comparison — AWS vs Azure

Within any deployment, and across any and all cloud providers, security is job zero. This means that above all else, security comes first. Whether it is AWS security vs Azure security, or any other cloud provider compared to an on-premises equivalent, security-minded solutions will always perform better, be safer, and be more reliable than solutions that are not built with security in mind.

Whether you are a burgeoning startup or a conglomerate, enabling security at every level of your organization is incredibly important.

When you are building your infrastructure, you need to keep security constantly at the forefront of your mind. The fact that, within the cloud, certain undifferentiated heavy lifting is handled on your behalf enables you and your team to be much more security focused than you otherwise could be. Alongside this general ability to focus more on security, all of the major cloud providers offer a myriad of services designed to help you build the most reliable and secure workloads you possibly can.

In this blog post, we will talk about AWS security vs Azure security within the cloud. We will break down several categories of cloud security, compare AWS and Azure in each, and do our best to conclude which cloud is superior.

Identity and Access Management

First, we will talk about how each cloud implements Identity and Access Management.

An integral part of cloud security is Identity and Access Management (IAM). In order to prevent unauthorized access to data and applications, organizations need to manage access and role permissions. There is a slight difference between the IAM frameworks used by AWS and Azure. For a holistic approach to cloud security, these differences are worth exploring.

Azure Active Directory

Microsoft Azure’s access and authorization services are based on Active Directory (AD).

When you subscribe to Microsoft’s commercial online services like Azure, Power Platform, Dynamics 365, and Intune, you automatically get basic Azure AD features. Furthermore, the free tier offers cloud authentication, unlimited single sign-on (SSO), multi-factor authentication (MFA), and role-based access control (RBAC).

In order to implement more advanced IAM features like secure mobile access, security reporting, and enhanced monitoring, you’ll need to pay a premium. Azure AD offers Premium P1 and Premium P2 paid tiers at $6 and $9 per month, respectively. This is rather exclusionary for any users who are looking to take advantage of free resources within the cloud, and AWS makes it easier for users to enable a strong security posture at low to no cost.

AWS Identity and Access Management (AWS IAM)

If you are an AWS customer, AWS IAM is free. Most people would consider IAM a foundational feature, so this structure makes sense. Alongside fine-grained access controls, the service supports logical organization, groups, roles, multi-factor authentication, real-time access monitoring, and JSON policy configuration.

Additionally, Amazon’s IAM comes with excellent security measures by default. Users must be assigned permissions manually by administrators, for example. Due to this, newly created users cannot take any action in AWS until they have received approval.

AWS IAM also natively integrates with effectively every AWS service, allowing effective and safe collaboration between principals, services, and the different parts of your AWS architecture.
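
To illustrate the default-deny model, here is a minimal boto3 sketch, with a hypothetical user name, policy name, and bucket, that creates a user and explicitly grants read-only access to a single bucket; until the policy is attached, the new user can do nothing.

    import json
    import boto3

    iam = boto3.client("iam")

    # Newly created users have no permissions until a policy is attached.
    iam.create_user(UserName="report-reader")  # hypothetical user name

    # Least-privilege inline policy: read-only access to one bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",    # hypothetical bucket
                "arn:aws:s3:::example-reports/*",
            ],
        }],
    }

    iam.put_user_policy(
        UserName="report-reader",
        PolicyName="read-reports-only",
        PolicyDocument=json.dumps(policy),
    )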

Key Management and Encryption

Secondly, we will talk about how each cloud handles Key Management and Encryption within the main object storage services of each cloud: Amazon S3 and Azure Blob Storage.

Amazon S3

Protecting data as it travels to and from Amazon S3, and at rest (while it is stored on disks in Amazon S3 data centers), is easy to implement when you are navigating the AWS cloud and, specifically, Amazon S3.

You can protect data in transit using Secure Socket Layer/Transport Layer Security (SSL/TLS) or client-side encryption.

You have the following options for protecting data at rest in Amazon S3:

  • Server-Side Encryption — Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the objects.
  • Client-Side Encryption — Encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

AWS Key Management Service (KMS) also offers fully managed key services through SSE-KMS and SSE-S3 server-side encryption, where all key management functions are handled by AWS without user intervention.
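
As a rough boto3 sketch of the two server-side options, with a hypothetical bucket and object keys:

    import boto3

    s3 = boto3.client("s3")

    # SSE-S3: Amazon S3 manages the encryption keys (AES-256).
    s3.put_object(
        Bucket="example-bucket",        # hypothetical bucket
        Key="reports/q3.csv",
        Body=b"example data",
        ServerSideEncryption="AES256",
    )

    # SSE-KMS: keys are managed in AWS KMS; without an explicit KMS key
    # ID, the account's default aws/s3 key is used.
    s3.put_object(
        Bucket="example-bucket",
        Key="reports/q3-kms.csv",
        Body=b"example data",
        ServerSideEncryption="aws:kms",
    )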

Within Azure Blob Storage, things work in effectively the same way:

Azure Blob Storage

Azure Blob Storage also offers server-side and client-side encryption using AES-256 symmetric keys. In addition, just like AWS, Azure offers managed key storage and management.

Overall, AWS does have slightly more encryption services and options — however whether you need this additional functionality or not is related to your use case and perhaps your specific industry.

Data Center Security

Thirdly, we will discuss data center security of AWS vs Azure.

Azure Data Center Security

Access to data centers is tightly controlled by outer and inner perimeters with progressively more advanced security measures at each level, including perimeter fencing, security officers, locked server racks, integrated alarm systems, around-the-clock video surveillance by the operations center, and multi-factor access control. Only authorized personnel have access to Microsoft’s data centers, and logical access to the Microsoft 365 infrastructure, including customer data, is restricted within Microsoft data centers.

Microsoft’s Security Operations Centers use integrated electronic access control systems to monitor data center sites and facilities. Camera systems provide effective coverage of the facility perimeter, entryways, shipping bays, server cages, interior aisles, and other sensitive security points. As part of a multi-layered security posture, security personnel receive alerts from the integrated security systems whenever an unauthorized entry attempt is detected.

There is an internationally recognized standard for checking the compliance, security, and reliability of data centers: the Uptime Institute’s tier system.

Microsoft has no official tier certification, and Uptime doesn’t list Microsoft in its certified tier database, but this doesn’t necessarily make Azure any better or worse.

AWS Data Center Security

AWS provides physical data center access only to approved employees. All AWS employees who need data center access must first apply for access and provide a valid business justification in order to gain access. These requests are granted based on the principle of least privilege, where requests must specify to which layer of the data center the individual needs access, and are time-bound. Requests are reviewed and approved by authorized personnel, and access is revoked after the requested time expires. Once granted admittance, individuals are restricted to areas specified in their permissions.

Closed-circuit television (CCTV) cameras record physical access points to server rooms. Retention of images is governed by legal and compliance requirements.

Ingress points to buildings are controlled by professional security staff using surveillance, detection systems, and other electronic means. Access to data centers is controlled by authorized staff using multi-factor authentication mechanisms. Entrances to server rooms are secured with devices that sound alarms to initiate an incident response if a door is forced or held open, and there are also inbuilt intrusion detection methods. AWS is likewise not certified by the Uptime Institute; while this doesn’t necessarily matter either, it is worth noting.

Cloud Monitoring

Finally, we will talk about how each cloud provider handles cloud monitoring.

Amazon CloudWatch

CloudWatch is AWS’s primary monitoring tool. It consolidates your systems’ and applications’ operational and performance data neatly in one place.

Visibility is the cornerstone of CloudWatch’s dashboard. Custom dashboards can be created to monitor specific groups of applications. You will also be able to get a quick overview of your critical infrastructure through its visual tools, such as graphs and metrics.

In addition, the platform combines user-defined thresholds with machine learning models to identify unusual behavior. When abnormal behavior is detected, CloudWatch Alarms alert administrators. When an alarm is triggered, the platform also supports automated responses, such as shutting down unused instances.

In addition to operational tasks, such as capacity and resource planning, this automation extends to administrative tasks as well. Using metrics such as CPU usage, CloudWatch can trigger automatic scaling.
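
As a minimal boto3 sketch of such an alarm, with a hypothetical instance ID and SNS topic ARN, this alerts administrators when average CPU stays above 80% for ten minutes:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when average CPU of one instance exceeds 80% for 10 minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-example",          # hypothetical alarm name
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,              # 5-minute datapoints
        EvaluationPeriods=2,     # two consecutive breaches
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        # Hypothetical SNS topic that notifies the on-call administrator.
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )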

Azure Monitor

The Azure Monitor service is Azure’s native monitoring tool. As with AWS CloudWatch, it aggregates performance and availability data across the entire Azure ecosystem. The visibility includes both on-premises and cloud environments. CloudWatch’s dashboard appears more cluttered than Azure Monitor’s.

Azure, however, makes things easier by categorizing data into metrics and logs, although finding the relevant data involves a slight learning curve. To detect issues quickly, you will typically rely on metrics data; when you need to consolidate all the data collected from different sources, you will refer to log data.

The Azure platform also offers some automation features, such as auto-scaling resources and security alerts. Azure, however, focuses more on metrics set by users.

Finally, CloudWatch is more focused on improving incident response and reducing the time to resolution. Many users would argue that this is the basis of having a monitoring tool in the first place.

Overall, whilst AWS and Azure do both have very intuitively designed, secure and powerful tools with which to host your architecture, AWS seems to be slightly more user friendly, extensive and resourceful when it comes to making sure your security posture is up to scratch.

Thursday, September 15, 2022

Find WiFi password on Windows 10 with Command Prompt

On Windows 10, you can find the WiFi password of the current connection or of saved networks. This information can come in handy if, for instance, you are trying to help someone with a laptop join the same wireless network, or you want to record the password for future reference.

While the Settings app does not offer a way to view this information, you can use Control Panel to find the WiFi password of the current connection, and Command Prompt (or PowerShell) to view the passwords of the current network and of networks you connected to in the past.

In this guide, you will learn the steps to quickly find a WiFi password on Windows 10 using Control Panel and Command Prompt.

--------------------------

Using Control Panel, you can only view the WiFi password for the network you’re currently connected to. If you want to see your current password or saved WiFi networks stored on Windows 10, you’ll need to use Command Prompt. These steps will also work on PowerShell.

To see the WiFi passwords from saved networks on Windows 10, use these steps:

Open Start.


Search for Command Prompt, right-click the result, and select the Run as administrator option.


Type the following command to view a list of the WiFi networks your computer has connected to at one point or another, and press Enter: netsh wlan show profiles


Type the following command to determine the WiFi password for a particular network and press Enter: netsh wlan show profile name="WiFi-Profile" key=clear

The password will be displayed in the Key Content field under “Security settings.” Remember to replace WiFi-Profile in the command with the name of the current or saved network whose password you want to see.
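
If you want to dump the saved password for every profile at once, here is a rough Python sketch that wraps the same two netsh commands; it assumes an elevated (administrator) session and English-language netsh output, so treat the parsing as illustrative.

    import re
    import subprocess

    def run(cmd):
        # netsh must run in an elevated (administrator) session.
        return subprocess.check_output(cmd, text=True, errors="ignore")

    profiles_output = run(["netsh", "wlan", "show", "profiles"])
    # Parsing assumes English output ("All User Profile : <name>").
    profiles = re.findall(r"All User Profile\s*:\s*(.+)", profiles_output)

    for name in profiles:
        detail = run(["netsh", "wlan", "show", "profile",
                      f"name={name.strip()}", "key=clear"])
        match = re.search(r"Key Content\s*:\s*(.+)", detail)
        password = match.group(1).strip() if match else "(no stored key)"
        print(f"{name.strip()}: {password}")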

Sunday, September 11, 2022

A Cloud Migration Questionnaire for Solution Architects

The questions you must ask your customers before migrating their on-premise workload to AWS Cloud

Context

Many companies operating from their own data centers have started migrating their applications to the cloud, and it has become an obvious choice for many startups to build cloud-native applications. This is mostly because of faster time to market and cost efficiency, in addition to the many other benefits of the cloud.

As a solution architect, you need to ask relevant questions to gather the required information from customers. The solution you build based on this information from the customer lays the foundation for future design solutions and migrations.

Scope

This article covers the questions you must ask your customers before planning a migration to the cloud, and the reasons each question matters.

I have tried to map customer requirements in response to questions asked with major AWS services that can be used while migrating to the AWS cloud.

Questions You Must Ask Your Customers

This list is not exhaustive, but it is generic enough to be applied to any public cloud migration.

Note: Going through these questions, you may already have thought of many answers and available cloud solutions for the migration. If not, please brainstorm possible answers before reading further; the suggested solutions will then make a lot more sense, and you will be able to relate them to your own.

Why do you want to migrate to the cloud?

Possible Answers

  • Latency or performance issues in on-premise setup.
  • Issues with aging hardware, license expiry, or a data center exit.
  • Need for managed services that are difficult to set up on-premises.
  • Need for setting up high availability applications.

Reasoning and Solutions

By asking these types of relevant questions, you understand their exact needs, and based on that you can offer different solutions.

  • Design VPC and talk to network people for their networking requirements.
  • Design Multi-AZ solutions for their high availability requirements.
  • Offer different managed services like SNS, SQS, RDS, etc.
  • Deploy applications to regions closer to the users to reduce latency.
  • Offer on-premise to cloud connectivity solutions like Site-to-Site VPN and AWS Direct Connect.

How many code changes can you afford as part of migration?

Possible Answers

  • No Code Changes
  • Minor Configuration Changes
  • Redevelopment

Reasoning and Solutions

This question helps identify the effort, time, and cost involved in the migration.

The migration strategies below are listed in order of complexity: time and cost increase proportionally, but so do flexibility and the opportunity for optimization.

Each migration strategy is mapped with the customer requirements and AWS services that can be used to address it.

Six R’s as Migration Strategies

  1. Retire — Decommission legacy systems that are no longer needed.
  2. Retain — You can retain some of the services on-premise due to legal/compliance issues.
  3. Rehost (Lift and Shift) — No Code Changes — Use AWS EC2, Elastic Beanstalk
  4. Repurchase (Drop and Shop) — Drop old services and repurchase licenses for third-party services.
  5. Replatform (Lift, Tinker, and Shift) — Minor Configuration Changes — Offer services like RDS, ElastiCache, etc.
  6. Refactor/Rearchitect — Redevelopment — Develop cloud-native applications using SQS, SNS, SES, S3, Aurora, DynamoDB, etc.

What type of database are you using?

This question helps you understand the features of the specific databases used by customers and compare them with cloud-managed services like RDS and Aurora for relational DBs, and ElastiCache and DynamoDB for NoSQL.

  • There are some feature-parity mismatches between RDS and MS SQL Server or Oracle that may not allow you to use the managed DB services.
  • If you require access to the underlying DB host, then AWS managed services will not work.

So in these scenarios, you can install the required DB on the EC2 instance; otherwise, you can directly use RDS for migrating your database workload.

In addition, you can use many other AWS services for Data Migration like AWS DMS, AWS Snowball, AWS Snowmobile, etc.

What type of load balancers are you using?

Possible Answers

  • Hardware load balancers
  • Software load balancers like HAProxy and Nginx

Reasoning and Solutions

This helps to understand the type of workload customers are running and the performance requirements for load balancers.

Most web applications running on-premise use hardware load balancers operating at L7 for more flexibility and rich features.

You can offer equivalent AWS services for their requirements; a minimal sketch follows the list below.

  • AWS ALB — Application load balancer operates at L7. Best suited for web applications routing traffic at the application level.
  • AWS NLB — Network load balancer operates at L4. Best suited for real-time high-performance applications.
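
As a minimal boto3 sketch of provisioning each type, with hypothetical names and subnet IDs:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Layer-7 load balancer for web traffic (content-based routing).
    elbv2.create_load_balancer(
        Name="web-alb",                                  # hypothetical name
        Type="application",
        Subnets=["subnet-0aaa1111", "subnet-0bbb2222"],  # hypothetical subnets
    )

    # Layer-4 load balancer for high-throughput, low-latency traffic.
    elbv2.create_load_balancer(
        Name="realtime-nlb",
        Type="network",
        Subnets=["subnet-0aaa1111", "subnet-0bbb2222"],
    )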

What application servers and versions are you using?

Again, this question helps you understand which application servers and versions are in use on-premise, and how compatible they are with what’s available in the cloud.

You can use AWS Elastic Beanstalk to deploy Java, Python, Ruby, and Node.js applications, but the version you are using on-premise may not be available, or it may be so old that it is not supported on Beanstalk. Before deciding on anything, do a proper assessment.

What operating system are you using?

You need to know if your application is too old to work on the latest operating system. If you have already deployed your application on the latest operating system, then the chances are better that it will work in the cloud.

Most operating systems on the cloud include licensing costs, but there are options to bring your existing on-premise license to the cloud.

There are many free and paid AMIs available to suit your operating system needs from AWS and its partners.

Is your application public facing?

This question helps you brainstorm solutions that may need DNS resolution, caching, latency handling, authentication, and security.

It may not be a big problem if it’s an internal application, as you can deploy it in a private subnet, which automatically blocks outside traffic.

It’s important to understand how DNS and CDN are used currently, and which firewall and other security services are being used on-premises to keep malicious traffic out and avoid major DDoS attacks.

Always try to use managed services for public-facing web applications, as there is less chance of them going down.

  • Route53 — high-performance managed service for public DNS queries to your web applications.
  • CloudFront — A low-latency, high-performance CDN for your static resources, which reduces overall latency for public-facing applications by serving content from edge locations closer to your customers, even when they are spread across regions.
  • Cognito — Lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily using IAM. It scales to millions of users and supports federated identities from all major identity providers.
  • WAF — AWS web application firewall. Can be deployed at load balancer, CloudFront, and API Gateway.
  • API Gateway — If you are distributing your APIs to your partners and want to have quick measures in place using API keys, rate limiting, and usage plans.

Is your application stateful or stateless?

It brings out many anti-patterns people are using — like storing session information on the physical machine where the application is running or enabling sticky sessions in load balancers.

You can offer RDS, DynamoDB, or ElastiCache to store sessions externally. Doing this makes applications truly stateless, which is very important for scaling them.
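
As a minimal boto3 sketch of externalized sessions, using a hypothetical DynamoDB table named "app-sessions" with hash key "session_id":

    import time
    import uuid
    import boto3

    dynamodb = boto3.resource("dynamodb")
    sessions = dynamodb.Table("app-sessions")  # hypothetical table

    def create_session(user_id, ttl_seconds=3600):
        session_id = str(uuid.uuid4())
        sessions.put_item(Item={
            "session_id": session_id,
            "user_id": user_id,
            # If "expires_at" is configured as the table's TTL attribute,
            # DynamoDB deletes stale sessions automatically.
            "expires_at": int(time.time()) + ttl_seconds,
        })
        return session_id

    def get_session(session_id):
        # Any app server can look up the session, so no sticky sessions
        # are needed at the load balancer.
        return sessions.get_item(Key={"session_id": session_id}).get("Item")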

Is your application containerized?

Containerized applications require fewer resources than VMs and start up in a fraction of a second. Containers package applications into a small, lightweight execution environment that shares the host operating system, and they help isolate the different microservices running on the same host.

Containerized applications can be deployed using orchestration platforms like Kubernetes, which handle container management, application deployment, and scaling, and have become the standard for cloud application deployment.

  • EC2, Elastic Beanstalk — To deploy non-containerized applications.
  • AWS ECS, AWS EKS — To deploy containerized applications.

What are the current resource requirements of the servers?

You can get the current resource requirement of workload on-premise, which you can then use to map to the cloud resources.

You can use memory- or CPU-optimized resources based on the nature of your application, but there is no formula that calculates the right size up front.

You need to iterate multiple times, using load tests and performance monitoring, to find the correct resource requirements in the cloud.

  • AWS Application Discovery Service — Collects and presents configuration, usage, and behavior data from your on-premise servers to help map capacity to the cloud.
  • AWS CloudWatch Metrics — Use it to monitor your application metrics as part of load tests.

How does your workload vary?

You need to ask how much traffic variation customers are observing and whether there are any recognizable patterns.

  • AWS ASG — An Auto Scaling Group helps applications scale in/out dynamically based on workload and traffic patterns.
  • Elastic Beanstalk — Automatically provisions ASG as opposed to manual provisioning.
  • ECS/EKS — You can use containers/pod autoscaling features if your application is dockerized and orchestrated using Kubernetes.

You can configure different scaling behaviors, such as simple scaling, target tracking scaling, and step scaling.
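
For instance, a minimal boto3 sketch of a target tracking policy, with a hypothetical Auto Scaling Group name, that keeps average CPU near 50%:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Target tracking: the ASG adds or removes instances to hold average
    # CPU near the target, instead of you defining explicit step rules.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",       # hypothetical ASG
        PolicyName="keep-cpu-near-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 50.0,
        },
    )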

What are your logging and monitoring requirements?

You need to find out how the different types of logs (operating system, DB, and application logs) are stored on-premises, and what their retention period is.

Applications push metrics to a monitoring pipeline, and sometimes metrics are derived from logs and pushed to monitoring systems.

Possible Answers

  • Logs get stored on on-premise servers where the application is running.
  • They get rotated and archived in the same server or other backup servers.
  • Then, they get deleted after a configured retention period.
  • Retention period for metrics is more than a year to get historical data.

Reasoning and Solutions

Logs give much more insight into what is going on, so they are very helpful for application debugging, auditing, and tracing issues.

Monitoring is an important aspect of an observability platform; it tells you about your application’s health and performance so you can take corrective action before it is too late.

  • AWS CloudWatch Logs — Store all your logs in CloudWatch Logs; from there, you can redirect them to different logging solutions like ELK or Splunk, or to S3.
  • AWS Athena — Load logs from S3 and analyze them.
  • CloudWatch Alarm — Create alarms based on search criteria over CloudWatch logs.
  • AWS S3 — Set S3 lifecycle rules to archive logs in the IA tier or Glacier based on requirements (see the sketch after this list).
  • AWS CloudTrail — Store auditing information and redirect it to S3.
  • AWS X-Ray — Use it for tracing requests and responses in microservice-based applications.
  • CloudWatch Agents — Applications can be instrumented to create custom metrics, or metrics can be derived from CloudWatch logs and pushed to monitoring applications like Wavefront or Prometheus.
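
As a minimal boto3 sketch of such lifecycle rules, with a hypothetical bucket and prefix, archiving logs to Standard-IA after 30 days, Glacier after 90, and deleting them after a year:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-log-archive",        # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-then-expire-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }],
        },
    )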

What is your current backup strategy?

Possible Answers

  • A script creates DB backups and stores them on backup servers.
  • The DB must be recovered manually from backups in case of a disaster.
  • Backups are taken hourly, daily and weekly.
  • Retention period may vary based on type of data.

Reasoning and Solutions

Backup plays a very important role in disaster recovery, so you need to plan your backup strategy well in advance as it impacts your customers and business heavily.

Applications should not store anything on disk, and they should be stateless to have effective backup policies in place for the application and its data.

  • Enable automated RDS snapshots, which have a maximum retention period of 35 days.
  • Script manual snapshots, which are retained indefinitely (see the sketch after this list).
  • Create a fresh AMI of the application setup whenever it changes.
  • Use the all-in-one service, AWS Backup, to address all backup-related needs.
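
A minimal boto3 sketch of such a manual snapshot script, with a hypothetical DB instance identifier:

    from datetime import datetime, timezone

    import boto3

    rds = boto3.client("rds")

    # Manual snapshots are kept until you delete them, unlike automated
    # snapshots, which are limited to a 35-day retention window.
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H%M")
    rds.create_db_snapshot(
        DBInstanceIdentifier="orders-db",    # hypothetical instance
        DBSnapshotIdentifier=f"orders-db-manual-{stamp}",
    )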

How do you build, package and deploy your application?

Jenkins is a standard tool for building and packaging applications to address CI needs, though people may use other open-source or custom tools for their CI/CD needs.

There are different deployment strategies used like blue-green, rolling, and canary deployment based on type of applications and environment where the application is being deployed.

You can offer AWS CodePipeline, CodeDeploy, CodeBuild, CodeCommit, or Amazon ECR for all CI/CD-related requirements. These integrate well with each other, and there are many Jenkins plugins available if you want to use AWS-native services from Jenkins.

What type of security services are you using?

You can gather information about the security services currently being used on-premises, like firewalls and other custom or third-party tools.

There are many equivalent AWS services offered.

  • AWS GuardDuty — Monitors and analyzes all types of logs and identifies any malicious IP addresses or domains.
  • AWS Config — Continuously monitors AWS resource configuration and takes defined reactive actions on violations.
  • AWS Shield — A DDoS protection service against malicious web traffic.
  • AWS WAF — Protects applications and APIs behind a load balancer, CloudFront, or API Gateway, blocking access based on IP address, request origin, request headers, and request body.

Where do you store application configuration details?

Storing an environment-specific configuration with the main application is an anti-pattern. There may be secret credentials which will be different for each environment, and these should be managed separately for the application’s deployment.

  • Application configuration can be stored in a private git repository.
  • AWS Parameter Store can be used to store configuration information or secret credentials.
  • If credentials need to be rotated, you can use AWS Secrets Manager to store them (see the sketch after this list).
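
As a minimal boto3 sketch of reading configuration at deploy time, with hypothetical parameter and secret names:

    import boto3

    ssm = boto3.client("ssm")
    secrets = boto3.client("secretsmanager")

    # Plain or encrypted configuration values from Parameter Store.
    db_host = ssm.get_parameter(
        Name="/myapp/prod/db_host",          # hypothetical parameter
        WithDecryption=True,
    )["Parameter"]["Value"]

    # Credentials that need automatic rotation live in Secrets Manager.
    db_password = secrets.get_secret_value(
        SecretId="myapp/prod/db_password",   # hypothetical secret
    )["SecretString"]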

How do you manage your infrastructure?

Possible Answers

  • Custom scripts to provision VMs on-premises.
  • Provisioning tools like Chef, Puppet or Ansible.

Reasoning and Solutions

It’s a pain for developers when they must set up applications on new servers or scale applications up during peak periods. Generally, sysadmins manage infrastructure tasks like VM or DB provisioning. What if developers could address their own infrastructure needs? That’s Infrastructure as Code (IaC).

  • AWS CloudFormation is the native tool available for provisioning infrastructure resources.
  • Terraform is a cloud-agnostic alternative that can be used for infrastructure provisioning.

What are your RTO and RPO requirements?

This question comes last, but it is very important for disaster recovery.

RTO defines the maximum application downtime you can bear in case of a disaster. If the defined RTO is 30 minutes, then the system should be recovered by 3:30 p.m. for a disaster that happened at 3 p.m.

RPO defines how much data loss (measured in time) you can bear in case of a disaster. If the defined RPO is 10 minutes, then after recovery from a disaster that happened at 3 p.m., you should have all data up to 2:50 p.m. available.

You can offer the following different disaster recovery solutions based on the increasing order of cost in proportion to better RTO and RPO.

  • Backup and Recovery — Store backups in S3 and recover from them.
  • Pilot Light — Keep core components of the application running at low capacity.
  • Active-Passive — Keep a scaled-down version of the fully running application on standby.
  • Active-Active — Keep a fully functional application taking traffic in both regions.

Customers may ask for the best possible RTO and RPO, but better RTO and RPO come at higher cost, so you should discuss the cost associated with each of these solutions with your customers.

Conclusion

I have covered many important questions for gathering the information you will need before planning any cloud migration. I hope it’s useful.

Thanks for reading!

Kubernetes Commands for Beginners

 This document provides a list of basic Kubernetes commands useful for beginners. These commands help in interacting with the cluster and ma...