Monday, September 26, 2022
How to build a business case for cloud migration
Cloud Security Comparison — AWS vs Azure
Across any deployment and any cloud provider, security is job zero: above all else, security comes first. Whether it is AWS security vs Azure security, or any other cloud provider compared with an on-premises equivalent, security-minded solutions will always perform better, be safer, and be more reliable than solutions that are not built with security in mind.
Whether you are a burgeoning startup or a conglomerate, enabling security at every level of your organization is incredibly important.
When you are building your infrastructure, you need to keep security at the forefront of your mind. Because the cloud handles certain undifferentiated heavy lifting on your behalf, you and your team can be far more security focused than you otherwise would be. Alongside this, all of the major cloud providers offer a myriad of services designed to help you build the most reliable and secure workloads you possibly can.
In this blog post, we will compare AWS security and Azure security. We will look at different categories of cloud security, work through a detailed AWS vs Azure security comparison, and do our best to conclude which cloud is superior.
Identity and Access Management
First, we will talk about how each cloud implements Identity and Access Management.
An integral part of cloud security is Identity and Access Management (IAM). In order to prevent unauthorized access to data and applications, organizations need to manage access and role permissions. There is a slight difference between the IAM frameworks used by AWS and Azure. For a holistic approach to cloud security, these differences are worth exploring.
Azure Active Directory
Microsoft Azure’s access and authorization services are based on Active Directory (AD).
When you subscribe to Microsoft’s commercial online services like Azure, Power Platform, Dynamics 365, and Intune, you automatically get basic Azure AD features. Furthermore, the free tier offers cloud authentication, unlimited single sign-on (SSO), multi-factor authentication (MFA), and role-based access control (RBAC).
To implement more advanced IAM features like secure mobile access, security reporting, and enhanced monitoring, you'll need to pay a premium. Azure AD offers Premium P1 and Premium P2 paid tiers at $6 and $9 per user per month, respectively. This is rather exclusionary for users looking to take advantage of free resources in the cloud; AWS makes it easier to build a strong security posture at low or no cost.
AWS Identity and Access Management (AWS IAM)
If you are an AWS customer, AWS Identity and Access Management (IAM) is free. Most people would consider IAM a foundational feature, so this structure makes sense. The service supports fine-grained access controls, logical organization, groups, roles, multi-factor authentication, real-time access monitoring, and JSON policy configuration.
Additionally, Amazon's IAM ships with secure defaults. For example, administrators must assign permissions manually, so newly created users cannot take any action in AWS until they have been granted them.
AWS IAM also integrates natively with effectively every AWS service, allowing effective and safe collaboration between principals, services, and the different parts of your AWS architecture.
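To illustrate the JSON policy configuration mentioned above, here is a minimal sketch in Python that builds a least-privilege policy document granting read-only access to a single S3 bucket. The bucket name and statement ID are hypothetical placeholders, not part of any real account.

```python
import json

# Hypothetical least-privilege policy: read-only access to one S3 bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnlyBucketAccess",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

# Serialize to the JSON document you would attach to a user, group, or role.
policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

Because newly created IAM users start with no permissions, nothing is allowed until a document like this is attached to them.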
Key Management and Encryption
Secondly, we will talk about how each cloud handles Key Management and Encryption within the main object storage services of each cloud: Amazon S3 and Azure Blob Storage.
Amazon S3
Protecting data as it travels to and from Amazon S3, and at rest (while it is stored on disks in Amazon S3 data centers), is easy to implement within the AWS cloud.
You can protect data in transit using Secure Socket Layer/Transport Layer Security (SSL/TLS) or client-side encryption.
You have the following options for protecting data at rest in Amazon S3:
- Server-Side Encryption — Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the objects.
- Client-Side Encryption — Encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
AWS also offers fully managed server-side encryption: with SSE-KMS, keys are managed in AWS Key Management Service (AWS KMS), while with SSE-S3, keys are managed entirely by Amazon S3. In both cases, key management functions are handled by AWS without user intervention.
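As a sketch of how you would select each server-side encryption mode, the snippet below builds the extra request parameters for an S3 PutObject call. The bucket, object key, and KMS key ARN are hypothetical; performing the actual upload would require an AWS SDK such as boto3.

```python
# Request parameters selecting each S3 server-side encryption mode.
# Bucket, object key, and KMS key ARN are hypothetical placeholders.

# SSE-S3: keys are created and managed entirely by Amazon S3.
sse_s3_args = {
    "Bucket": "example-bucket",
    "Key": "reports/2022/q3.csv",
    "ServerSideEncryption": "AES256",
}

# SSE-KMS: keys live in AWS KMS, giving you auditing and rotation control.
sse_kms_args = {
    "Bucket": "example-bucket",
    "Key": "reports/2022/q3.csv",
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
}

# With boto3 you would pass these along with the payload, e.g.:
#   s3_client.put_object(Body=data, **sse_kms_args)
```

Decryption on download is transparent in both modes, provided the caller has permission to use the key.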
Azure Blob Storage
Within Azure Blob Storage, things work in much the same way: it also offers server-side and client-side encryption using AES-256 symmetric keys, and, just like AWS, Azure provides managed key storage and management.
Overall, AWS does have slightly more encryption services and options — however whether you need this additional functionality or not is related to your use case and perhaps your specific industry.
Data Center Security
Thirdly, we will discuss data center security of AWS vs Azure.
Azure Data Center Security
Access to data centers is tightly controlled by outer and inner perimeters, with progressively more advanced security measures at each level, including perimeter fencing, security officers, locked server racks, integrated alarm systems, around-the-clock video surveillance by the operations center, and multi-factor access control. Only authorized personnel have access to Microsoft's data centers, and logical access to the Microsoft 365 infrastructure, including customer data, is restricted within them.
Microsoft's Security Operations Centers use integrated electronic access control systems to monitor data center sites and facilities. Camera systems cover the facility perimeter, entryways, shipping bays, server cages, interior aisles, and other sensitive security points, and as part of a multi-layered security posture, security personnel receive alerts whenever an unauthorized entry attempt is detected.
The Uptime Institute's Tiering System is an internationally recognized standard for checking the compliance, security, and reliability of data centers.
Microsoft holds no official tier certification, and Uptime's certified tier database does not list Microsoft — but this doesn't necessarily make Azure any better or worse.
AWS Data Center Security
AWS provides physical data center access only to approved employees. All AWS employees who need data center access must first apply for access and provide a valid business justification in order to gain access. These requests are granted based on the principle of least privilege, where requests must specify to which layer of the data center the individual needs access, and are time-bound. Requests are reviewed and approved by authorized personnel, and access is revoked after the requested time expires. Once granted admittance, individuals are restricted to areas specified in their permissions.
Closed-circuit television (CCTV) cameras record physical access points to server rooms. Retention of the footage is governed by legal and compliance requirements.
Ingress points to buildings are controlled by professional security staff using surveillance, detection systems, and other electronic means. Access to data centers is controlled by authorized staff using multi-factor authentication mechanisms. Entrances to server rooms are secured with devices that sound alarms and initiate an incident response if a door is forced or held open, alongside built-in intrusion detection. AWS is likewise not certified by the Uptime Institute; while this doesn't necessarily matter either, it is worth noting.
Cloud Monitoring
Finally, we will talk about how each cloud provider handles cloud monitoring.
Amazon CloudWatch
CloudWatch is AWS's primary monitoring tool. It consolidates operational and performance data from your systems and applications neatly in one place.
Visibility is the cornerstone of CloudWatch’s dashboard. Custom dashboards can be created to monitor specific groups of applications. You will also be able to get a quick overview of your critical infrastructure through its visual tools, such as graphs and metrics.
In addition, the platform combines user-defined thresholds with machine learning models to identify unusual behavior. When abnormal behavior is detected, CloudWatch Alarms alert administrators. When an alarm is triggered, the platform also supports automated responses, such as shutting down unused instances.
In addition to operational tasks, such as capacity and resource planning, this automation extends to administrative tasks as well. Using metrics such as CPU usage, CloudWatch can automatically scale performance.
Azure Monitor
The Azure Monitor service is Azure's native monitoring tool. Like AWS CloudWatch, it aggregates performance and availability data across the entire Azure ecosystem, with visibility into both on-premises and cloud environments. CloudWatch's dashboard appears more cluttered than Azure Monitor's.
Azure also makes things easier by categorizing data into metrics and logs, although finding the relevant data still involves a slight learning curve. To detect issues quickly, you can rely on metrics data; when you need to consolidate all the data collected from different sources, you will typically refer to log data.
The Azure platform also offers some automation features, such as auto-scaling resources and security alerts. Azure, however, focuses more on metrics set by users.
Finally, CloudWatch is more focused on improving incident response and reducing the time to resolution. Many users would argue that this is the basis of having a monitoring tool in the first place.
Overall, whilst AWS and Azure both offer intuitively designed, secure, and powerful tools with which to host your architecture, AWS seems slightly more user-friendly, extensive, and resourceful when it comes to keeping your security posture up to scratch.
Thursday, September 15, 2022
Find WiFi password on Windows 10 with Command Prompt
On Windows 10, you can find the WiFi password of the current connection or saved networks. The ability to determine this information can come in handy, for instance, if you are trying to help someone with a laptop join the same wireless network or remember it for future reference.
While the Settings app does not offer a way to view this information, you can use Control Panel to find the WiFi password of the current connection and Command Prompt (or PowerShell) to view the current and saved network passwords you connected in the past.
In this guide, you will learn the steps to quickly find a WiFi password on Windows 10 using Control Panel and Command Prompt.
--------------------------
Using Control Panel, you can only view the WiFi password for the network you’re currently connected to. If you want to see your current password or saved WiFi networks stored on Windows 10, you’ll need to use Command Prompt. These steps will also work on PowerShell.
To see the WiFi passwords from saved networks on Windows 10, use these steps:
Open Start.
Search for Command Prompt, right-click the top result, and select the Run as administrator option.
Type the following command to view a list of the WiFi networks your computer has connected to and press Enter: netsh wlan show profiles
Type the following command to view the WiFi password for a particular network, replacing "WiFi-Profile" with the name of the network, and press Enter: netsh wlan show profile name="WiFi-Profile" key=clear
The password appears in the Key Content field under Security settings.
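If you need to check many saved networks, you can script the two commands above. The sketch below is a Python outline with a sample of the `netsh` output hard-coded in place of actually running the command (on Windows you would capture it with `subprocess`, and the exact formatting may vary by Windows version); it extracts the profile names you would then feed to the `key=clear` command.

```python
import re

# Sample output of `netsh wlan show profiles`; on Windows you would capture
# this with subprocess.run([...], capture_output=True, text=True).
sample_output = """
Profiles on interface Wi-Fi:

User profiles
-------------
    All User Profile     : HomeNetwork
    All User Profile     : OfficeWiFi
"""

def parse_profiles(netsh_output: str) -> list[str]:
    """Extract saved WiFi profile names from `netsh wlan show profiles` output."""
    return [m.strip() for m in re.findall(r"All User Profile\s*:\s*(.+)", netsh_output)]

for name in parse_profiles(sample_output):
    # Each name would be substituted into:
    #   netsh wlan show profile name="<name>" key=clear
    print(name)
```

This is only a parsing sketch; the profile names shown are hypothetical examples.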
Sunday, September 11, 2022
A Cloud Migration Questionnaire for Solution Architects
The questions you must ask your customers before migrating their on-premise workload to AWS Cloud

Context
Many companies operating from their own data centers have started migrating their applications to the cloud, and building cloud-native applications has become an obvious choice for many startups. This is driven largely by faster time to market and cost efficiency, in addition to the many other benefits of the cloud.
As a solution architect, you need to ask relevant questions to gather the required information from customers. The solution you build based on this information from the customer lays the foundation for future design solutions and migrations.
Scope
This article covers the questions you must ask your customers before planning a migration to the cloud, and the reasons each question is important.
I have also tried to map the customer requirements that emerge from these questions to the major AWS services that can be used while migrating to the AWS cloud.
Questions You Must Ask Your Customers
This list is not exhaustive, but it is generic enough to be applied to any public cloud migration.
- Why do you want to migrate to the cloud?
- How many code changes can you afford as part of migration?
- What type of database are you using?
- What type of load balancers are you using?
- What application servers and versions are you using?
- What operating system are you using?
- Is your application public facing?
- Is your application stateful or stateless?
- Is your application containerized?
- What are the current resource requirements of the servers?
- How is your workload variation?
- What are your logging and monitoring requirements?
- What is your current backup strategy?
- How do you build, package and deploy your application?
- What type of security services are you using?
- Where do you store application configuration details?
- How do you manage your infrastructure?
- What are your RTO and RPO requirements?
Note: Going through these questions, you may already have thought of answers and available cloud solutions for the migration. If not, please brainstorm possible answers before reading further; the rest of the article will make more sense, and you will be able to relate it to your own solutions.
Why do you want to migrate to the cloud?
Possible Answers
- Latency or performance issues in on-premise setup.
- Issues with aging hardware or license expiry or data center exit.
- Requirement for managed services that are difficult to set up on-premises.
- Need for setting up high availability applications.
Reasoning and Solutions
By asking these types of relevant questions, you understand their exact needs, and based on them you can offer different solutions.
- Design VPC and talk to network people for their networking requirements.
- Design Multi-AZ solutions for their high availability requirements.
- Offer different managed services like SNS, SQS, RDS, etc.
- Deploy applications to regions closer to the users to reduce latency.
- Offer on-premises to cloud connectivity solutions like Site-to-Site VPN and AWS Direct Connect.
How many code changes can you afford as part of migration?
Possible Answers
- No Code Changes
- Minor Configuration Changes
- Redevelopment
Reasoning and Solutions
This question helps us to identify the efforts, time, and cost involved in migration.
I have listed the different migration strategies in order of complexity. Time and cost increase proportionally with complexity, but the more complex strategies give better flexibility and more opportunity for optimization.
Each migration strategy is mapped with the customer requirements and AWS services that can be used to address it.
Six R’s as Migration Strategies
- Retire — Decommission legacy systems that are no longer needed.
- Retain — You can retain some of the services on-premise due to legal/compliance issues.
- Rehost (Lift and Shift) — No Code Changes — Use AWS EC2, Elastic Beanstalk
- Repurchase (Drop and Shop) — Drop old services and repurchase licenses for third-party services.
- Replatform (Lift, Tinker, and Shift) — Minor Configuration Changes — Offer services like RDS, Elasticache, etc.
- Refactor/Rearchitect — Redevelopment — Develop cloud-native applications using SQS, SNS, SES, S3, Aurora, DynamoDB, etc.
What type of database are you using?
This question helps you understand the features of specific databases used by customers and compare them with cloud-managed services like RDS, Aurora for Relational DB, and Elasticache and DynamoDB for NoSQL.
- There are some feature parity mismatches between RDS and Microsoft SQL Server or Oracle that may prevent you from using managed DB services.
- If you require access to the underlying DB host, then AWS managed services will not work.
So in these scenarios, you can install the required DB on the EC2 instance; otherwise, you can directly use RDS for migrating your database workload.
In addition, you can use many other AWS services for Data Migration like AWS DMS, AWS Snowball, AWS Snowmobile, etc.
What type of load balancers are you using?
Possible Answers
- Hardware load balancers
- Software load balancers like HaProxy, Nginx
Reasoning and Solutions
This helps to understand the type of workload customers are running and the performance requirements for load balancers.
Most web applications running on-premise use hardware load balancers operating at L7 for more flexibility and rich features.
You can offer equivalent AWS services for their requirement.
- AWS ALB — Application load balancer operates at L7. Best suited for web applications routing traffic at the application level.
- AWS NLB — Network load balancer operates at L4. Best suited for real-time high-performance applications.
What application servers and versions are you using?
Again, this question helps you understand which application servers and versions the customer is using on-premises, and how compatible they are with what is available in the cloud.
You can use AWS Elastic Beanstalk to deploy Java, Python, Ruby, and Node.js applications, but the version you are using on-premises may not be available, or may be so old that it is no longer supported on Elastic Beanstalk. Do a proper assessment before deciding on anything.
What operating system are you using?
You need to know if your application is too old to work on the latest operating system. If you have deployed your application to the latest operating system, then there are more chances that it will work on the cloud.
Most operating systems on the cloud include licensing costs, but there are bring-your-own-license options that let you move your existing on-premises license to the cloud.
There are many free and paid AMIs available to suit your operating system needs from AWS and its partners.
Is your application public facing?
This question helps you brainstorm different solutions that may need DNS resolution, caching, latency, authentication, and security.
It may not be a big problem if it’s an internal application, as you can deploy it in a private subnet, which automatically blocks outside traffic.
It's important to understand how DNS and CDN are used currently, and which firewall and other security services are being used on-premises to keep malicious traffic out and avoid major DDoS attacks.
Always try to use managed services for public-facing web applications, as they are less likely to go down.
- Route53 — high-performance managed service for public DNS queries to your web applications.
- CloudFront — A low-latency, high-performance CDN for your static resources, which reduces overall latency for public-facing applications. CloudFront serves content from edge locations close to your customers, which helps when they are spread across regions.
- Cognito — Lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. It scales to millions of users and supports federated identities from all major identity providers.
- WAF — AWS web application firewall. Can be deployed at load balancer, CloudFront, and API Gateway.
- API Gateway — If you are distributing your APIs to your partners and want to have quick measures in place using API keys, rate limiting, and usage plans.
Is your application stateful or stateless?
This question brings out many anti-patterns people use, like storing session information on the physical machine where the application is running, or enabling sticky sessions in load balancers.
You can offer RDS, DynamoDB, or Elasticache to store sessions externally. Doing this will make applications truly stateless, which is very important for scaling applications.
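To illustrate the stateless pattern, here is a minimal Python sketch in which session data lives in an external store behind a small interface. The in-memory dict is a stand-in for ElastiCache (Redis) or DynamoDB, and all names are hypothetical.

```python
class ExternalSessionStore:
    """Stand-in for an external store such as ElastiCache (Redis) or DynamoDB.

    Because no session state lives on the application server itself, any
    server instance behind the load balancer can handle any request.
    """

    def __init__(self):
        self._store = {}  # in-memory stand-in; real code would call Redis/DynamoDB

    def put(self, session_id: str, data: dict) -> None:
        self._store[session_id] = data

    def get(self, session_id: str) -> dict:
        return self._store.get(session_id, {})


def handle_request(store: ExternalSessionStore, session_id: str) -> int:
    """A stateless request handler: reads and writes session state externally."""
    session = store.get(session_id)
    session["hits"] = session.get("hits", 0) + 1
    store.put(session_id, session)
    return session["hits"]


store = ExternalSessionStore()
print(handle_request(store, "abc"))  # 1
print(handle_request(store, "abc"))  # 2 — works even if another server handled it
```

Swapping the dict for a Redis or DynamoDB client changes nothing in the handler, which is exactly what makes the application safe to scale horizontally.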
Is your application containerized?
Containerized applications require fewer resources than VMs and start up in a fraction of a second. Containers package applications into a small, lightweight execution environment that shares the host operating system, and they help isolate different microservices running on the same host.
Containerized applications can be deployed using orchestration platforms like Kubernetes, which handle container management, application deployment, and scaling, and which have become the standard for cloud application deployment.
- EC2, Elastic Beanstalk — To deploy non-containerized applications.
- AWS ECS, AWS EKS — To deploy containerized applications.
What are the current resource requirements of the servers?
You can gather the current resource requirements of the on-premises workload, which you can then map to cloud resources.
You can use memory- or CPU-optimized resources based on the nature of your application, but there is no formula to calculate the right size up front.
You need to iterate multiple times, using load tests and performance monitoring, to find the correct resource requirements in the cloud.
- AWS Application Discovery Service — Helps collect and present configuration, usage, and behavior data from your on-premises servers to map capacity in the cloud.
- AWS CloudWatch Metrics — Use it to monitor your application metrics as part of load tests.
How is your workload variation?
You need to ask how much traffic variation customers observe and whether any specific patterns exist.
- AWS ASG — Auto Scaling Group helps applications scale in/out dynamically based on workloads and pattern.
- Elastic Beanstalk — Automatically provisions an ASG, as opposed to manual provisioning.
- ECS/EKS — You can use containers/pod autoscaling features if your application is dockerized and orchestrated using Kubernetes.
You can configure different scaling behaviors, like simple scaling, target tracking scaling, and step scaling.
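As a rough illustration of how target tracking scaling decides capacity, here is a sketch of the underlying arithmetic: desired capacity is scaled so that the metric returns to its target value. This is simplified; the real ASG algorithm also accounts for cooldowns and instance warm-up.

```python
import math

def desired_capacity(current_capacity: int, metric_value: float,
                     target_value: float, max_capacity: int) -> int:
    """Simplified target tracking: scale capacity proportionally to the metric.

    e.g. 4 instances at 80% average CPU with a 50% target
         -> ceil(4 * 80 / 50) = 7 instances.
    """
    desired = math.ceil(current_capacity * metric_value / target_value)
    # Clamp to the ASG's configured bounds.
    return max(1, min(desired, max_capacity))

print(desired_capacity(4, 80.0, 50.0, max_capacity=10))  # scale out to 7
print(desired_capacity(4, 20.0, 50.0, max_capacity=10))  # scale in to 2
```

Step scaling, by contrast, would map metric ranges to fixed capacity adjustments rather than computing the proportion directly.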
What are your logging and monitoring requirements?
You need to find out how the different types of logs (operating system, database, and application logs) are stored on-premises, and what their retention period is.
Applications push metrics to a monitoring pipeline; sometimes metrics are derived from logs and then pushed to monitoring systems.
Possible Answers
- Logs get stored on on-premise servers where the application is running.
- They get rotated and archived in the same server or other backup servers.
- Then, they get deleted after a configured retention period.
- Retention period for metrics is more than a year to get historical data.
Reasoning and Solutions
Logs give much more insight about what is going on, so they are very helpful for application debugging, auditing, and tracing the issues.
Monitoring is an important aspect of an observability platform, which gives information about your application health and how it is performing to take corrective actions before it gets too late.
- AWS CloudWatch Logs — Store all your logs in CloudWatch Logs; from there you can redirect them to different logging solutions like ELK, Splunk, or S3.
- AWS Athena — Load logs from S3 and analyze them.
- CloudWatch Alarm — Create alarms based on search criteria on Cloudwatch logs.
- AWS S3 — Set S3 lifecycle rules to archive logs in IA Tier or Glacier based on requirement.
- AWS CloudTrail — Store auditing information and redirect to S3.
- AWS X-Ray — Use it for tracing requests and responses in microservice-based applications.
- CloudWatch Agent — Applications can be instrumented to create custom metrics, or metrics can be derived from CloudWatch logs and pushed to monitoring applications like Wavefront or Prometheus.
What is your current backup strategy?
Possible Answers
- Script creates backup of DB and stores it in backup servers.
- Need to manually recover DB from backups in case of a disaster.
- Backups are taken hourly, daily and weekly.
- Retention period may vary based on type of data.
Reasoning and Solutions
Backup plays a very important role in disaster recovery, so you need to plan your backup strategy well in advance as it impacts your customers and business heavily.
Applications should not store anything on disk, and they should be stateless to have effective backup policies in place for the application and its data.
- Enable automatic RDS snapshots, which support a maximum retention period of 35 days.
- Script manual snapshots for backups that must be retained indefinitely.
- Create a fresh AMI of the application setup whenever it changes.
- Use the all-in-one service, AWS Backup, to address all backup-related needs.
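The hourly/daily/weekly retention scheme described in the possible answers above can be sketched as a small pruning rule that decides whether a given snapshot should be kept. The retention windows here are illustrative assumptions, not a recommendation.

```python
from datetime import datetime, timedelta

def should_keep(snapshot_time: datetime, now: datetime) -> bool:
    """Illustrative retention rule: keep hourly snapshots for 1 day,
    daily (midnight) snapshots for 30 days, weekly (Monday midnight)
    snapshots for 1 year."""
    age = now - snapshot_time
    if age <= timedelta(days=1):
        return True                                    # keep every hourly snapshot
    if age <= timedelta(days=30):
        return snapshot_time.hour == 0                 # keep one per day
    if age <= timedelta(days=365):
        return snapshot_time.hour == 0 and snapshot_time.weekday() == 0
    return False                                       # past retention: delete

now = datetime(2022, 9, 11, 12, 0)
print(should_keep(datetime(2022, 9, 11, 3, 0), now))   # True: within one day
print(should_keep(datetime(2022, 9, 1, 0, 0), now))    # True: daily, within 30 days
print(should_keep(datetime(2022, 9, 1, 6, 0), now))    # False: non-midnight, older than a day
```

AWS Backup expresses the same idea declaratively through backup plans with per-rule retention, so you rarely need to write this logic yourself.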
How do you build, package and deploy your application?
Jenkins is a standard tool for building and packaging applications to address CI needs, and people may use other open source or custom tools for their CI/CD needs.
There are different deployment strategies used like blue-green, rolling, and canary deployment based on type of applications and environment where the application is being deployed.
You can offer AWS CodePipeline, CodeDeploy, CodeBuild, CodeCommit, and Amazon ECR for all CI/CD-related requirements; these integrate well with each other. There are also many Jenkins plugins available if you want to use AWS-native services from Jenkins.
What type of security services are you using?
You can get information about the security services currently used on-premises, such as firewalls and other custom or third-party tools.
There are many equivalent AWS services offered.
- Amazon GuardDuty — Monitors and analyzes all types of logs and identifies malicious IP addresses or domains.
- AWS Config — Continuously monitors AWS resource configuration and takes defined reactive actions on violations.
- AWS Shield — DDoS protection service against malicious web traffic.
- AWS WAF — Protects applications and API behind load balancer, CloudFront, API gateway to block access based on IP address, request origin, request header, and request body.
Where do you store application configuration details?
Storing an environment-specific configuration with the main application is an anti-pattern. There may be secret credentials which will be different for each environment, and these should be managed separately for the application’s deployment.
- Application configuration can be stored in a private git repository.
- AWS Parameter Store can be used to store configuration information or secret credentials.
- If there is a requirement for rotating credentials, then you can use AWS Secrets Manager to store them.
How do you manage your infrastructure?
Possible Answers
- Custom scripts to provision VMs on-premises.
- Provisioning tools like Chef, Puppet or Ansible.
Reasoning and Solutions
It's a pain for developers when they must set up applications on new servers or scale up applications during peak periods. Generally, sysadmins used to manage infrastructure tasks like VM or DB provisioning. What if developers could address their own infrastructure needs? That's Infrastructure as Code (IaC).
- AWS Cloudformation is the native tool available for provisioning infrastructure resources.
- Terraform is another cloud-agnostic tool that can be used for infrastructure provisioning.
What are your RTO and RPO requirements?
This question comes last, but it is very important for disaster recovery.
RTO defines the maximum application downtime you can bear in case of a disaster. If the defined RTO is 30 minutes, then for a disaster that happens at 3 p.m., the system must be recovered by 3:30 p.m.
RPO defines how much data loss (measured in time) you can bear in case of a disaster. If the defined RPO is 10 minutes, then after recovering from a disaster at 3 p.m., all data up to 2:50 p.m. should be available.
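The two worked examples above reduce to a small calculation: given the disaster time and the defined RTO and RPO, you can derive the recovery deadline and the latest point to which data must be restorable.

```python
from datetime import datetime, timedelta

def recovery_targets(disaster_time: datetime, rto_minutes: int, rpo_minutes: int):
    """Return (recovery deadline per RTO, latest restorable data point per RPO)."""
    recovery_deadline = disaster_time + timedelta(minutes=rto_minutes)
    data_restored_to = disaster_time - timedelta(minutes=rpo_minutes)
    return recovery_deadline, data_restored_to

disaster = datetime(2022, 9, 11, 15, 0)  # disaster at 3:00 p.m.
deadline, data_point = recovery_targets(disaster, rto_minutes=30, rpo_minutes=10)
print(deadline.strftime("%I:%M %p"))    # system must be recovered by 3:30 p.m.
print(data_point.strftime("%I:%M %p"))  # data must be available up to 2:50 p.m.
```

Tightening either number pushes you down the list of DR strategies below toward Active-Active, and the cost rises accordingly.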
You can offer the following different disaster recovery solutions based on the increasing order of cost in proportion to better RTO and RPO.
- Backup and Recovery — Store backups in S3 and recover from them.
- Pilot Light — Keeps core components of the application running at low capacity.
- Active-Passive — Keeps scaled down version of fully running application as standby.
- Active-Active — Keeps a fully functional application taking traffic in both regions.
Customers may ask for the best possible RTO and RPO, but each solution comes at a different cost, so you should also discuss the cost associated with each option with the customer.
Conclusion
I have covered many important questions to gather the required information which you will need before planning any cloud migration. I hope it’s useful.
Thanks for reading!