Azure Update Manager & Azure Arc

Hello, readers! Today, we’re diving into Azure Update Manager, an essential tool for anyone looking to optimize and manage their cloud solutions. Have you heard of it, or are you already using it? We’re eager to hear your thoughts, questions, and experiences. Please leave your comments below, and let’s explore the potential of Azure Update Manager together!

1. What is Azure Update Manager?

  • An introduction to Azure Update Manager, explaining its primary role in managing updates for operating systems and applications on virtual machines.
    • What is Azure Arc?
      • Azure Arc is a Microsoft solution that lets you manage resources from different cloud environments and on-premises locations directly from Azure.

2. Benefits of Azure Update Manager

  • Automation: How Update Manager automates the update process, reducing operational overhead.
  • Compliance: The importance of keeping systems updated to meet compliance and security requirements.
    • Azure Arc
      • One of the significant advantages of Azure Arc is the ability to manage updates from various cloud providers in a single dashboard. This means that regardless of where your applications and services are hosted—whether in Azure, AWS, Google Cloud, or on-premises servers—you can monitor and apply updates centrally.

3. Integration with Azure Automation

  • How Azure Update Manager integrates with Azure Automation to facilitate large-scale update management.

4. Configuration and Implementation

  • A step-by-step guide on how to set up Azure Update Manager in an organization, including best practices.

5. Reporting and Monitoring

  • The importance of detailed reporting and continuous monitoring of applied updates, and how this can be achieved with Azure Update Manager.

Centralization: With Azure Arc, I don’t need to switch between different management consoles. Everything is accessible in one place, making it easier to visualize and control. This helps me a lot in my daily tasks, especially with centralized patch scheduling and custom maintenance windows for more efficient deployment, all in a single console.

By leveraging this tool, we have greatly minimized the impact during patch windows and enhanced our visibility into all ongoing activities. We are very pleased with the results! 🚀🚀

I hope this post has shed light on the benefits of Azure Update Manager. Have you had any experiences with it? If you haven’t started using Azure Arc yet, it’s worth considering this solution to optimize your IT infrastructure management. I’d love to hear your thoughts and questions! Please leave your comments below, and let’s continue the conversation!😉

https://azure.microsoft.com/en-us/products/azure-update-management-center
https://azure.microsoft.com/pt-br/products/azure-arc

Microsoft Copilot for Azure 🎉

Many things can now be done quickly and easily with the announcement of Microsoft Copilot for Azure, an AI companion that helps you design, operate, optimize, and troubleshoot your cloud infrastructure and services. Combining the power of cutting-edge large language models (LLMs) with the Azure Resource Model, Copilot for Azure enables rich understanding and management of everything that’s happening in Azure, from the cloud to the edge.

Microsoft Copilot for Azure helps you to complete complex tasks faster, quickly discover and use new capabilities, and instantly generate deep insights to scale improvements broadly across the team and organization.

  • Design: create and configure the services needed while aligning with organizational policies
  • Operate: answer questions, author complex commands, and manage resources
  • Troubleshoot: orchestrate across Azure services for insights to summarize issues, identify causes, and suggest solutions
  • Optimize: improve costs, scalability, and reliability through recommendations for your environment

Optimizing cost and performance

Copilot aids in understanding invoices, spending patterns, changes, and outliers, and it recommends cost optimizations.

https://techcommunity.microsoft.com/t5/azure-infrastructure-blog/simplify-it-management-with-microsoft-copilot-for-azure-save/ba-p/3981106

👏🎉Microsoft Copilot for Azure is already being used internally by Microsoft employees and with a small group of customers.🚀☁

#azure #cloud #microsoft #copilot

📢 Important Announcement: Windows Server 2012/R2 Reaches End of Support 📢

Attention, Windows Server users! We have an important update for you: Windows Server 2012/R2 reached its end of support on October 10, 2023.

What does this mean for you? It means that Microsoft will no longer provide security updates, bug fixes, or technical support for Windows Server 2012/R2. This leaves your system vulnerable to potential security risks and compatibility issues.

To ensure the stability and security of your infrastructure, we strongly recommend considering an upgrade to a supported version of Windows Server, such as Windows Server 2019 or the latest version available. Upgrading will not only provide you with enhanced security features but also access to the latest technology advancements.

Migrate to Azure for free Extended Security Updates

https://azure.microsoft.com/pt-br/updates/windows-server-2012r2-reaches-end-of-support/

Remember, staying up-to-date with the latest technology is essential for maintaining a secure and efficient IT environment. Act now and protect your business from potential risks.

#StaySecure #UpgradeYourServer #WindowsServer2012

Reduce Snapshot costs with new GCP Archive Snapshot

Google Cloud Platform (GCP) announced new prices for disk snapshots, effective April 1, 2023. The changes affect regional and multi-regional snapshots.

  1. Regional snapshot storage: The price of regional snapshot storage will increase from $0.026 per GB per month to $0.05 per GB per month.
  2. Multi-regional snapshot storage: The price of multi-regional snapshot storage will increase from $0.026 per GB per month to $0.065 per GB per month.

For many companies, the cost of snapshots has almost doubled, so it is now really important to check and analyze your snapshot protection, especially if you have a large organization and thousands of PD snapshots.

  • As a suggestion to minimize costs, use archive snapshots:
    • Archive snapshots are a new snapshot type designed to be more cost-effective than standard snapshots. They are stored in a compressed format and can only be accessed by creating a new disk from the snapshot. Archive snapshot storage costs $0.019 per GB per month, significantly lower than standard snapshot storage ($0.05 per GB per month).
    • It is very important to remember the “early deletion” cost: the minimum lifecycle period for this type of snapshot is 90 days.
  • Important steps:
    • Retention policy and snapshot frequency:
      • Review your retention policy and snapshot frequency to comply with the policy. Reducing the frequency of snapshots or adjusting the retention period can help reduce costs. Only take snapshots as often as necessary and consider deleting older snapshots that are no longer needed.
    • Use regional (rather than multi-regional) snapshots:
      • Regional snapshots are generally cheaper than multi-regional snapshots. If your use case allows it, opt for regional snapshots to save on costs.
    • Optimizing snapshot size:
      • Try to minimize the amount of data stored in the snapshots by removing unnecessary files or temporary data. If you take several snapshots from one disk, the incrementally saved data and deleted files will be added to the active snapshots. Ensuring efficient disk usage can help reduce the size of snapshots and, consequently, lower costs.
    • Snapshot Schedule and Automation:
      • Automate the snapshot creation process using scheduling tools or Python scripts. This helps you manage snapshots efficiently and avoid unnecessary manual operations.
# Illustrative sketch: creates an archive snapshot via the Compute Engine
# REST API (snapshots.insert) using the google-api-python-client library.
from googleapiclient import discovery

def create_snapshot(project, zone, disk_name, snapshot_name):
    compute = discovery.build("compute", "v1")
    config = {
        "name": snapshot_name,
        "sourceDisk": f"zones/{zone}/disks/{disk_name}",
        "storageLocations": ["southamerica-east1"],
        "labels": {"bkp": "archive"},
        # 'snapshotType' set to 'ARCHIVE' in the request body
        "snapshotType": "ARCHIVE",
    }
    return compute.snapshots().insert(project=project, body=config).execute()
    • Choose the Right Disk Type and Size + Analyze:
      • Choose the appropriate persistent disk type and size based on your application’s needs. Using the right disk specifications can prevent overprovisioning and reduce costs.
      • Monitor and analyze snapshot usage regularly to identify any unnecessary or redundant snapshots. Removing such snapshots can lead to cost reductions.
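Putting the rates above together, a quick back-of-the-envelope calculation shows both the archive savings and the early-deletion trap (the numbers are the per-GB-month prices quoted earlier; the 90-day minimum is modeled as three months of billing):

```python
# Illustrative GCP snapshot cost math using the per-GB-month rates quoted above.
STANDARD_REGIONAL = 0.05   # $/GB-month after the April 2023 increase
MULTI_REGIONAL = 0.065     # $/GB-month after the increase
ARCHIVE = 0.019            # $/GB-month for archive snapshots

def monthly_cost(gb, rate):
    """Storage cost for one month at the given per-GB rate."""
    return gb * rate

def archive_minimum_cost(gb):
    # Archive snapshots are billed for at least 90 days (early-deletion charge),
    # so even a short-lived archive snapshot pays roughly 3 months of storage.
    return gb * ARCHIVE * 3

# 1 TB of snapshot data kept for one month:
print(monthly_cost(1000, STANDARD_REGIONAL))  # 50.0 -> standard regional
print(monthly_cost(1000, ARCHIVE))            # 19.0 -> archive (62% cheaper)
# The same 1 TB archived but deleted after 30 days still pays for 90 days:
print(archive_minimum_cost(1000))             # 57.0 -> early deletion costs more
```

Note how an archive snapshot deleted within its first 90 days can end up costing more than a standard one kept for a single month, so archive snapshots only pay off for long-retention backups.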

#GCP #CLOUD #GOOGLE

Azure Active Directory renamed to Microsoft Entra ID

Microsoft announced yesterday (the 11th) that it will rename its enterprise identity service from Azure Active Directory (Azure AD) to Microsoft Entra ID by the end of the year.

Azure AD offers a range of security features, including single sign-on, multifactor authentication, and conditional access, and Microsoft says it helps defend against 99.9% of cybersecurity attacks.

Although standalone license names are also being modified with this rebranding, it will not affect the service’s features, and everything will work as it did before the name change.

“Licensing features and plans, sign-in URLs, and APIs remain unchanged, and all existing deployments, configurations, and integrations will continue to work as before. Starting today, you’ll see notifications in the admin portal, on our websites, in the documentation, and in other places where you interact with Azure AD,”
said Joy Chik, Microsoft’s President of Identity and Network Access.

New services

Microsoft also announced the launch of two new services, Entra Internet Access and Entra Private Access, in public preview, designed to provide secure access to corporate resources.

Entra Internet Access is used to protect public-facing web services, allowing administrators to restrict visitors through Conditional Access.

“Microsoft Entra Internet Access is an identity-centric Secure Web Gateway that protects access to all internet, SaaS, and Microsoft 365 apps and resources,” said Chik.

Entra Private Access is a VPN-like service that enables remote access to internal, private corporate resources.

“Microsoft Entra Private Access is an identity-centric Zero Trust Network Access (ZTNA) solution that secures access to private apps and resources,” said Chik.

You can sign up today for a preview of the upcoming Entra Internet Access and Entra Private Access features ahead of their expected launch later this year.


https://learn.microsoft.com/pt-br/azure/active-directory/fundamentals/new-name

AWS SageMaker Canvas

Now anybody can build machine learning prediction models using a point-and-click interface. Great!

AWS today announced a new machine learning service, Amazon SageMaker Canvas. Unlike its existing machine learning services, the target audience here isn’t highly technical data scientists and engineers but any engineer or business user inside a company. The promise of SageMaker Canvas is that it will allow anybody to build machine learning prediction models, using a point-and-click interface.

If that sounds familiar, it may be because Azure and others offer similar tools, though AWS may have the advantage that a lot of companies already store all of their data in AWS anyway.

University of Lincoln launches cloud computing degree with Microsoft

Dr Derek Foster, programme leader and associate professor in the School of Computer Science at the University of Lincoln, said: “The course combines the broad concepts of cloud computing, such as compute, storage and networking, with the opportunity to develop practical skills around cloud architectural design, deployment and development.”

More information on the one-year, full-time course is available on the university’s website.

Public preview: Azure Route Server

What is Azure Route Server (Preview)?

Azure Route Server simplifies dynamic routing between your network virtual appliance (NVA) and your virtual network. It allows you to exchange routing information directly via the Border Gateway Protocol (BGP) between any NVA that supports BGP and the Azure Software Defined Network (SDN) in the Azure Virtual Network (VNet), without the need to manually configure or maintain route tables. Azure Route Server is a fully managed service and is configured with high availability.

Use Azure Route Server to enable dynamic routing between your network appliances and gateways in Azure instead of static routing. Azure Route Server provides Border Gateway Protocol (BGP) endpoints that use this standard routing protocol to exchange routes.

Easily use Azure Route Server with existing or new deployments, with support for any network topology, such as hub and spoke, full mesh, or flat virtual network.

Once enabled, it advertises all routes to ExpressRoute, making management easier.

#cloud #Azure

Google GCP – New Tau VMs

T2D, the first instance type in the Tau VM family, is based on 3rd Gen AMD EPYC™ processors and leapfrogs the scale-out VMs of any leading public cloud provider available today, both in terms of performance and workload total cost of ownership (TCO). The x86 compatibility provided by these AMD EPYC processor-based VMs gives you market-leading performance improvements and cost savings without having to port your applications to a new processor architecture.

Google reports that Tau VMs offer 56% higher absolute performance and 42% higher price-performance (est. SPECrate2017_int_base) compared to general-purpose VMs from any of the leading public cloud vendors.

https://cloud.google.com/blog/products/compute/google-cloud-introduces-tau-vms

Graviton Challenge

Today, Dave Brown, VP of Amazon EC2 at AWS, announced the Graviton Challenge as part of his session on AWS silicon innovation at the Six Five Summit 2021. We invite you to take the Graviton Challenge and move your applications to run on AWS Graviton2. The challenge, intended for individual developers and small teams, is based on the experiences of customers who’ve already migrated. It provides a framework of eight approximately four-hour chunks to prepare, port, optimize, and finally deploy your application onto Graviton2 instances. Getting your application running on Graviton2, and enjoying the improved price performance, aren’t the only rewards: there are prizes and swag for those who complete the challenge!

AWS Graviton2 is a custom-built processor from AWS that’s based on the Arm64 architecture. It’s supported by popular Linux operating systems including Amazon Linux 2, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu. Compared to fifth-generation x86-based Amazon Elastic Compute Cloud (Amazon EC2) instance types, Graviton2 instance types have a 20% lower cost. Overall, customers who have moved applications to Graviton2 typically see up to 40% better price performance for a broad range of workloads including application servers, container-based applications, microservices, caching fleets, data analytics, video encoding, electronic design automation, gaming, open-source databases, and more.

Before diving deeper into the challenge, check out the fun introductory video from Jeff Barr, Chief Evangelist, AWS, and Dave Brown, Vice President, EC2, in the original post. As Jeff mentions in the video: same exact workload, same or better performance, and up to 40% better price performance!
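As a sanity check on those figures: price-performance is performance per dollar, so cost and performance gains compound. The ~12% performance number below is only an illustrative assumption used to reach the quoted ~40%, not an AWS benchmark:

```python
def price_performance_gain(cost_reduction, perf_gain=0.0):
    # price-performance = performance / cost, so the two effects compound
    return (1 + perf_gain) / (1 - cost_reduction) - 1

# 20% lower cost at identical performance already yields 25% better price-performance:
print(round(price_performance_gain(0.20), 2))        # 0.25
# Combined with an assumed ~12% performance gain (illustrative, not an AWS
# figure), you reach roughly the quoted 40%:
print(round(price_performance_gain(0.20, 0.12), 2))  # 0.4
```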

  • Best adoption – enterprise
    Based on the performance gains, total cost savings, number of instances the workload is running on, and time taken to migrate the workload (faster is better), for companies with over 1000 employees. The winner will also receive a chance to present at the conference.
  • Best adoption – small/medium business
    Based on the performance gains, total cost savings, number of instances the workload is running on, and time taken to migrate the workload (faster is better), for companies with 100-1000 employees. The winner will also receive a chance to present at the conference.
  • Best adoption – startup
    Based on the performance gains, total cost savings, number of instances the workload is running on, and time taken to migrate the workload (faster is better), for companies with fewer than 100 employees. The winner will also receive a chance to present at the conference.
  • Best new workload adoption
    Awarded to a workload that’s new to EC2 (migrated to Graviton2 from on-premises, or other cloud) based on the performance gains, total cost savings, number of instances the workload is running on, and time taken to migrate the workload (faster is better). The winner will also receive a chance to participate in a video or written case study.
  • Most impactful adoption
    Awarded to the workload with the biggest social impact based on details provided about what the workload/application does. Applications in this category are related to fields such as sustainability, healthcare and life sciences, conservation, learning/education, justice/equity. The winner will also receive a chance to participate in a video or written case study.
  • Most innovative adoption
    Applications in this category solve unique problems for their customers, address new use cases, or are groundbreaking. The award will be based on the workload description, price performance gains, and total cost savings. The winner will also receive a chance to participate in a video or written case study.

https://aws.amazon.com/blogs/aws/migrate-your-workloads-with-the-graviton-challenge/