Overview
About Azure networking
Architecture
Virtual Datacenters
Asymmetric routing with multiple network paths
Secure network designs
Hub-spoke topology
Network security best practices
Highly available network virtual appliances
Combine load balancing methods
Disaster recovery using Azure DNS and Traffic Manager
Plan and design
Virtual networks
Cross-premises connectivity - VPN
Cross-premises connectivity - dedicated private
Concepts
Virtual networks
Network load balancing
Application load balancing
DNS
DNS-based traffic distribution
Connect on-premises - VPN
Connect on-premises - dedicated
Get started
Create your first virtual network
How to
Internet connectivity
Network load balance public servers
Application load balance public servers
Protect web applications
Distribute traffic across locations
Internal connectivity
Network load balance private servers
Application load balance private servers
Connect virtual networks (same location)
Connect virtual networks (different locations)
Cross-premises connectivity
Create a S2S VPN connection (IPsec/IKE)
Create a P2S VPN connection (SSTP with certificates)
Create a dedicated private connection (ExpressRoute)
Management
Network monitoring overview
Check resource usage against Azure limits
View network topology
Sample scripts
Azure CLI
Azure PowerShell
Tutorials
Load balance VMs
Connect a computer to a virtual network
Reference
Azure CLI
Azure PowerShell
.NET
Node.js
REST
Resources
Author templates
Azure Roadmap
Community templates
Networking blog
Pricing
Pricing calculator
Regional availability
Stack Overflow
Videos
Azure networking
5/21/2018 • 13 minutes to read
Azure provides a variety of networking capabilities that can be used together or separately. Click any of the
following key capabilities to learn more about them:
Connectivity between Azure resources: Connect Azure resources together in a secure, private virtual network in
the cloud.
Internet connectivity: Communicate to and from Azure resources over the Internet.
On-premises connectivity: Connect an on-premises network to Azure resources through a virtual private
network (VPN) over the Internet, or through a dedicated connection to Azure.
Load balancing and traffic direction: Load balance traffic to servers in the same location and direct traffic to
servers in different locations.
Security: Filter network traffic between network subnets or individual virtual machines (VMs).
Routing: Use default routing or fully control routing between your Azure and on-premises resources.
Manageability: Monitor and manage your Azure networking resources.
Deployment and configuration tools: Use a web-based portal or cross-platform command-line tools to deploy
and configure network resources.
Internet connectivity
All Azure resources connected to a VNet have outbound connectivity to the Internet by default. The private IP
address of the resource is source network address translated (SNAT) to a public IP address by the Azure
infrastructure. To learn more about outbound Internet connectivity, read the Understanding outbound connections
in Azure article.
To communicate inbound to Azure resources from the Internet, or to communicate outbound to the Internet
without SNAT, a resource must be assigned a public IP address. To learn more about public IP addresses, read the
Public IP addresses article.
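The SNAT behavior described above can be illustrated with a toy translation table. This is a sketch of the general source-NAT idea only; the port-allocation scheme and addresses are illustrative, not Azure's actual implementation:

```python
class SnatTable:
    """Toy source-NAT (SNAT) table: maps (private_ip, private_port) flows to
    (public_ip, public_port) pairs, the way outbound flows from a VNet are
    rewritten to a shared public IP. Port allocation here is illustrative,
    not Azure's actual algorithm."""

    def __init__(self, public_ip, first_port=1024):
        self.public_ip = public_ip
        self._next_port = first_port
        self._flows = {}  # (private_ip, private_port) -> (public_ip, public_port)

    def translate(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self._flows:
            # New outbound flow: allocate the next free public port.
            self._flows[key] = (self.public_ip, self._next_port)
            self._next_port += 1
        return self._flows[key]

snat = SnatTable("203.0.113.10")
print(snat.translate("10.0.0.4", 50000))  # new flow gets a public port
print(snat.translate("10.0.0.4", 50000))  # same flow reuses the same mapping
```

Assigning a public IP address to a resource, as described above, removes the need for this translation on outbound flows.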
On-premises connectivity
You can access resources in your VNet securely over either a VPN connection, or a direct private connection. To
send network traffic between your Azure virtual network and your on-premises network, you must create a virtual
network gateway. You configure settings for the gateway to create the type of connection that you want, either
VPN or ExpressRoute.
You can connect your on-premises network to a VNet using any combination of the following options:
Point-to-site (VPN over SSTP)
The following picture shows separate point-to-site connections between multiple computers and a VNet:
This connection is established between a single computer and a VNet. This connection type is great if you're just
getting started with Azure, or for developers, because it requires few or no changes to your existing network. It's
also convenient when you are connecting from a remote location such as a conference or home. Point-to-site
connections are often coupled with a site-to-site connection through the same virtual network gateway. The
connection uses the SSTP protocol to provide encrypted communication over the Internet between the computer
and the VNet. The latency for a point-to-site VPN is unpredictable, since the traffic traverses the Internet.
Site-to-site (IPsec/IKE VPN tunnel)
This connection is established between your on-premises VPN device and an Azure VPN Gateway. This
connection type enables any on-premises resource that you authorize to access the VNet. The connection is an
IPsec/IKE VPN that provides encrypted communication over the Internet between your on-premises device and
the Azure VPN gateway. You can connect multiple on-premises sites to the same VPN gateway. The on-premises
VPN device at each site must have an externally-facing public IP address that is not behind a NAT. The latency for a
site-to-site connection is unpredictable, since the traffic traverses the Internet.
ExpressRoute (dedicated private connection)
This type of connection is established between your network and Azure, through an ExpressRoute partner. This
connection is private. Traffic does not traverse the Internet. The latency for an ExpressRoute connection is
predictable, since traffic doesn't traverse the Internet. ExpressRoute can be combined with a site-to-site connection.
To learn more about all the previous connection options, read the Connection topology diagrams article.
DNS-based traffic distribution
The Azure Traffic Manager service uses DNS to direct client requests to the most appropriate service endpoint,
based on a traffic-routing method and the health of the endpoints. The client then connects directly to that
endpoint. Traffic Manager detects when an endpoint is unhealthy and redirects clients to a different, healthy
endpoint. To learn more about Traffic Manager, read the Azure Traffic Manager overview article.
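The failover behavior can be sketched as priority-based endpoint selection: try endpoints in order and hand out the first healthy one. The endpoint names and health states below are made up:

```python
def pick_endpoint(endpoints, health):
    """Return the first healthy endpoint in priority order, mimicking Traffic
    Manager's priority-based routing. Endpoint names are hypothetical."""
    for ep in endpoints:
        if health.get(ep, False):
            return ep
    return None  # no healthy endpoint left to hand out

endpoints = ["primary.contoso.com", "backup.contoso.com"]
print(pick_endpoint(endpoints, {"primary.contoso.com": True, "backup.contoso.com": True}))
print(pick_endpoint(endpoints, {"primary.contoso.com": False, "backup.contoso.com": True}))
```

In the second call the primary endpoint's probe has failed, so clients are directed to the backup endpoint instead.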
Application load balancing
The Azure Application Gateway service provides an application delivery controller (ADC) as a service. Application
Gateway offers various Layer 7 (HTTP/HTTPS) load-balancing capabilities for your applications, including a web
application firewall to protect your web applications from vulnerabilities and exploits. Application Gateway also
allows you to optimize web farm productivity by offloading CPU-intensive SSL termination to the application
gateway.
Other Layer 7 routing capabilities include round-robin distribution of incoming traffic, cookie-based session
affinity, URL path-based routing, and the ability to host multiple websites behind a single application gateway.
Application Gateway can be configured as an Internet-facing gateway, an internal-only gateway, or a combination
of both. Application Gateway is fully Azure managed, scalable, and highly available. It provides a rich set of
diagnostics and logging capabilities for better manageability. To learn more about Application Gateway, read the
Application Gateway overview article.
The following picture shows URL path-based routing with Application Gateway:
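The URL path-based routing capability mentioned above can be sketched as a longest-prefix match over request paths. The path map and pool names here are hypothetical:

```python
def route_request(path, path_map, default_pool):
    """Pick a back-end pool by the longest matching URL path prefix, the way
    an Application Gateway URL path map does. Pool names are made up."""
    best_pool, best_len = default_pool, -1
    for prefix, pool in path_map.items():
        if path.startswith(prefix) and len(prefix) > best_len:
            best_pool, best_len = pool, len(prefix)
    return best_pool

path_map = {"/images/": "image-server-pool", "/video/": "video-server-pool"}
print(route_request("/images/logo.png", path_map, "default-pool"))  # image-server-pool
print(route_request("/api/orders", path_map, "default-pool"))       # default-pool
```

Requests that match no configured path prefix fall through to the default back-end pool.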
Network load balancing
The Azure Load Balancer provides high-performance, low-latency Layer 4 load-balancing for all UDP and TCP
protocols. It manages inbound and outbound connections. You can configure public and internal load-balanced
endpoints. You can define rules to map inbound connections to back-end pool destinations by using TCP and HTTP
health-probing options to manage service availability. To learn more about Load Balancer, read the Load Balancer
overview article.
The following picture shows an Internet-facing multi-tier application that utilizes both external and internal load
balancers:
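By default, Azure Load Balancer distributes flows by hashing the connection 5-tuple (source IP, source port, destination IP, destination port, protocol) to pick a back-end instance. A minimal sketch of that idea follows; the hash function and addresses are illustrative, not the platform's actual algorithm:

```python
import hashlib

def pick_backend(flow, backends):
    """Map a (src_ip, src_port, dst_ip, dst_port, protocol) flow to a
    back-end pool member with a hash, analogous to Load Balancer's default
    5-tuple hash distribution. The hash function itself is illustrative."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

backends = ["10.0.1.4", "10.0.1.5", "10.0.1.6"]
flow = ("198.51.100.7", 40000, "203.0.113.10", 80, "tcp")
# Packets of the same flow always land on the same back-end VM:
print(pick_backend(flow, backends) == pick_backend(flow, backends))  # True
```

Because the mapping is a pure function of the 5-tuple, packets of one connection stay on one VM, while different connections spread across the pool.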
Security
You can filter traffic to and from Azure resources using the following options:
Network: You can implement Azure network security groups (NSGs) to filter inbound and outbound traffic to
Azure resources. Each NSG contains one or more inbound and outbound rules. Each rule specifies the source
IP addresses, destination IP addresses, port, and protocol used to filter the traffic. NSGs can be applied to
individual subnets and individual VMs. To learn more about NSGs, read the Network security groups overview
article.
Application: By using an Application Gateway with web application firewall, you can protect your web
applications from vulnerabilities and exploits. Common examples are SQL injection attacks, cross-site scripting,
and malformed headers. Application Gateway filters out this traffic and stops it from reaching your web servers.
You can configure which rules you want enabled, and SSL negotiation policies can be configured to disable
certain policies. To learn more about the web application firewall, read the Web
application firewall article.
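NSG rules are evaluated in priority order (lowest number first) and the first matching rule wins. That evaluation can be sketched as follows; field matching is simplified to exact values plus a '*' wildcard, whereas real NSG rules also match CIDR ranges and service tags, and the built-in default rules are richer than a single deny:

```python
def evaluate_nsg(rules, packet):
    """Evaluate NSG-style rules in priority order (lowest number first); the
    first matching rule wins. Real NSG rules match CIDR ranges and service
    tags; this sketch compares exact values or the '*' wildcard only."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if all(rule[f] in ("*", packet[f]) for f in ("src", "dst", "port", "protocol")):
            return rule["access"]
    return "Deny"  # stand-in for the implicit deny-all default rule

rules = [
    {"priority": 100, "src": "*", "dst": "10.0.1.4", "port": 443, "protocol": "tcp", "access": "Allow"},
    {"priority": 200, "src": "*", "dst": "10.0.1.4", "port": "*", "protocol": "*", "access": "Deny"},
]
print(evaluate_nsg(rules, {"src": "198.51.100.7", "dst": "10.0.1.4", "port": 443, "protocol": "tcp"}))  # Allow
print(evaluate_nsg(rules, {"src": "198.51.100.7", "dst": "10.0.1.4", "port": 22, "protocol": "tcp"}))   # Deny
```

The second packet targets port 22, so the high-priority allow rule for 443 doesn't match and the broader deny rule applies.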
If you need a network capability Azure doesn't provide, or want to use network applications you use on-premises,
you can implement the products in VMs and connect them to your VNet. The Azure Marketplace contains several
different VMs pre-configured with network applications you may currently use. These pre-configured VMs are
typically referred to as network virtual appliances (NVAs). NVAs are available with applications such as firewalls and
WAN optimization.
Routing
Azure creates default route tables that enable resources connected to any subnet in any VNet to communicate with
each other. You can implement either or both of the following types of routes to override the default routes Azure
creates:
User-defined: You can create custom route tables with routes that control where traffic is routed to for each
subnet. To learn more about user-defined routes, read the User-defined routes article.
Border gateway protocol (BGP): If you connect your VNet to your on-premises network using an Azure
VPN Gateway or ExpressRoute connection, you can propagate BGP routes to your VNets. BGP is the standard
routing protocol commonly used on the Internet to exchange routing and reachability information between two
or more networks. When used in the context of Azure virtual networks, BGP enables the Azure VPN gateways
and your on-premises VPN devices, called BGP peers or neighbors, to exchange routes that inform both
gateways about the availability and reachability of the prefixes that flow through the gateways or routers involved.
BGP can also enable transit routing among multiple networks by propagating routes a BGP gateway learns
from one BGP peer to all other BGP peers. To learn more about BGP, see the BGP with Azure VPN Gateways
overview article.
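When routes overlap, Azure selects the route with the longest (most specific) prefix match, which is how a user-defined route overrides a broader system route. A minimal sketch of that selection rule, with illustrative prefixes and next-hop names:

```python
import ipaddress

def select_route(dest_ip, routes):
    """Longest-prefix-match route selection: among all routes whose prefix
    contains the destination, pick the most specific one. The route table
    below is an illustrative example, not a real Azure route table."""
    dest = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, next_hop in routes.items():
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    return best[1] if best else None

routes = {
    "0.0.0.0/0": "internet",        # default system route
    "10.0.0.0/16": "vnet-local",    # system route for the VNet address space
    "10.0.2.0/24": "nva-firewall",  # user-defined route to a virtual appliance
}
print(select_route("10.0.2.5", routes))  # most specific prefix wins
print(select_route("8.8.8.8", routes))
```

A destination in 10.0.2.0/24 matches all three prefixes, but the /24 user-defined route is the most specific, so traffic is steered to the appliance; anything outside 10.0.0.0/16 falls back to the default route.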
Manageability
Azure provides the following tools to monitor and manage networking:
Activity logs: All Azure resources have activity logs, which provide information about operations that have
taken place, their status, and who initiated them. To learn more about activity logs, read the Activity logs
overview article.
Diagnostic logs: Periodic and spontaneous events are created by network resources and logged in Azure
storage accounts, sent to an Azure Event Hub, or sent to Azure Log Analytics. Diagnostic logs provide insight into
the health of a resource. Diagnostic logs are provided for Load Balancer (Internet-facing), network security
groups, routes, and Application Gateway. To learn more about diagnostic logs, read the Diagnostic logs
overview article.
Metrics: Metrics are performance measurements and counters collected over a period of time on resources.
Metrics can be used to trigger alerts based on thresholds. Currently metrics are available on Application
Gateway. To learn more about metrics, read the Metrics overview article.
Troubleshooting: Troubleshooting information is accessible directly in the Azure portal. The information helps
diagnose common problems with ExpressRoute, VPN Gateway, Application Gateway, Network Security Logs,
Routes, DNS, Load Balancer, and Traffic Manager.
Role-based access control (RBAC): Control who can create and manage networking resources with role-
based access control (RBAC). Learn more about RBAC by reading the Get started with RBAC article.
Packet capture: The Azure Network Watcher service provides the ability to run a packet capture on a VM
through an extension within the VM. This capability is available for Linux and Windows VMs. To learn more
about packet capture, read the Packet capture overview article.
Verify IP flows: Network Watcher allows you to verify IP flows between an Azure VM and a remote resource
to determine whether packets are allowed or denied. This capability provides administrators the ability to
quickly diagnose connectivity issues. To learn more about how to verify IP flows, read the IP flow verify
overview article.
Troubleshoot VPN connectivity: The VPN troubleshooter capability of Network Watcher provides the ability
to query a connection or gateway and verify the health of the resources. To learn more about troubleshooting
VPN connections, read the VPN connectivity troubleshooting overview article.
View network topology: View a graphical representation of the network resources in a VNet with Network
Watcher. To learn more about viewing network topology, read the Topology overview article.
Pricing
Some of the Azure networking services have a charge, while others are free. View the Virtual network, VPN
Gateway, Application Gateway, Load Balancer, Network Watcher, DNS, Traffic Manager and ExpressRoute pricing
pages for more information.
Next steps
Create your first VNet, and connect a few VMs to it, by completing the steps in the Create your first virtual
network article.
Connect your computer to a VNet by completing the steps in the Configure a point-to-site connection article.
Load balance Internet traffic to public servers by completing the steps in the Create an Internet-facing load
balancer article.
Microsoft Azure Virtual Datacenter: A Network
Perspective
5/8/2018 • 34 minutes to read
Microsoft Azure: Move faster, Save money, Integrate on-premises apps and data
Overview
Migrating on-premises applications to Azure, even without any significant changes (an approach known as “lift and
shift”), provides organizations the benefits of a secured and cost-efficient infrastructure. However, to make the most
of the agility possible with cloud computing, enterprises should evolve their architectures to take advantage of
Azure capabilities. Microsoft Azure delivers hyper-scale services and infrastructure, enterprise-grade capabilities
and reliability, and many choices for hybrid connectivity. Customers can choose to access these cloud services
either via the Internet or with Azure ExpressRoute, which provides private network connectivity. The Microsoft
Azure platform allows customers to seamlessly extend their infrastructure into the cloud and build multi-tier
architectures. Additionally, Microsoft partners provide enhanced capabilities by offering security services and
virtual appliances that are optimized to run in Azure.
This article provides an overview of patterns and designs that can be used to solve the architectural scale,
performance, and security concerns many customers face when thinking about moving en masse to the cloud. An
overview of how to fit different organizational IT roles into the management and governance of the system is also
discussed, with emphasis on security requirements and cost optimization.
NOTE
It's important to understand that the vDC is NOT a discrete Azure product, but the combination of various features and
capabilities to meet your exact requirements. vDC is a way of thinking about your workloads and Azure usage to maximize
your resources and abilities in the cloud. The virtual DC is therefore a modular approach to building up IT services in
Azure, respecting organizational roles and responsibilities.
The vDC can help enterprises get workloads and applications into Azure for the following scenarios:
Hosting multiple related workloads
Migrating workloads from an on-premises environment to Azure
Implementing shared or centralized security and access requirements across workloads
Mixing DevOps and Centralized IT appropriately for a large enterprise
The key to unlocking the advantages of the vDC is a centralized hub-and-spoke topology with a mix of Azure features:
Azure VNet, NSGs, VNet peering, user-defined routes (UDRs), and Azure identity with role-based access control
(RBAC).
Who Needs a Virtual Data Center?
Any Azure customer that needs to move more than a couple of workloads into Azure can benefit from thinking
about using common resources. Depending on the magnitude, even single applications can benefit from using the
patterns and components used to build a vDC.
If your organization has a centralized IT, Network, Security, and/or Compliance team/department, a vDC can help
enforce policy points, segregation of duty, and ensure uniformity of the underlying common components while
giving application teams as much freedom and control as is appropriate for your requirements.
Organizations that are looking to adopt DevOps can utilize the vDC concepts to provide authorized pockets of Azure
resources and ensure they have total control within that group (either subscription or resource group in a common
subscription), but the network and security boundaries stay compliant as defined by a centralized policy in a hub
VNet and Resource Group.
Identity and Directory services are a key aspect of all data centers, both on-premises and in the cloud. Identity is
related to all aspects of access and authorization to services within the vDC. To help ensure that only authorized
users and processes access your Azure Account and resources, Azure uses several types of credentials for
authentication. These include passwords (to access the Azure account), cryptographic keys, digital signatures, and
certificates. Azure Multi-Factor Authentication (MFA) is an additional layer of security for accessing Azure services.
Azure MFA provides strong authentication with a range of easy verification options—phone call, text message, or
mobile app notification—and allows customers to choose the method they prefer.
Any large enterprise needs to define an identity management process that describes the management of individual
identities, their authentication, authorization, roles, and privileges within or across the vDC. The goals of this
process should be to increase security and productivity while decreasing cost, downtime, and repetitive manual
tasks.
Enterprises and organizations can require a demanding mix of services for different lines of business (LOBs), and
employees often have different roles when involved with different projects. A vDC requires good cooperation
between different teams, each with specific role definitions, to get systems running with good governance. The
matrix of responsibilities, access, and rights can be extremely complex. Identity management in a vDC is
implemented through Azure Active Directory (AAD) and Role-Based Access Control (RBAC).
A Directory Service is a shared information infrastructure for locating, managing, administering, and organizing
everyday items and network resources. These resources can include volumes, folders, files, printers, users, groups,
devices, and other objects. Each resource on the network is considered an object by the directory server.
Information about a resource is stored as a collection of attributes associated with that resource or object.
All Microsoft online business services rely on Azure Active Directory (AAD) for sign-in and other identity needs.
Azure Active Directory is a comprehensive, highly available identity and access management cloud solution that
combines core directory services, advanced identity governance, and application access management. AAD can be
integrated with on-premises Active Directory to enable single sign-on for all cloud-based and locally hosted (on-
premises) applications. The user attributes of on-premises Active Directory can be automatically synchronized to
AAD.
A single global administrator is not required to assign all permissions in a vDC. Instead, each specific department
(or group of users or services in the Directory Service) can have the permissions required to manage their own
resources within a vDC. Structuring permissions requires balancing. Too many permissions can impede
performance efficiency, and too few or loose permissions can increase security risks. Azure Role-Based Access
Control (RBAC) helps to address this problem by offering fine-grained access management for vDC resources.
Security infrastructure
Security infrastructure, in the context of a vDC, is mainly related to the segregation of traffic in the vDC's specific
virtual network segment, and how to control ingress and egress flows throughout the vDC. Azure is based on a
multi-tenant architecture that prevents unauthorized and unintentional traffic between deployments, using Virtual
Network (VNet) isolation, access control lists (ACLs), load balancers, and IP filters, along with traffic flow policies.
Network address translation (NAT) separates internal network traffic from external traffic.
The Azure fabric allocates infrastructure resources to tenant workloads and manages communications to and from
virtual machines (VMs). The Azure hypervisor enforces memory and process separation between VMs and
securely routes network traffic to guest OS tenants.
Connectivity to the cloud
The vDC needs connectivity with external networks to offer services to customers, partners and/or internal users.
This usually means connectivity not only to the Internet, but also to on-premises networks and data centers.
Customers can build their security policies to control what and how specific vDC hosted services are accessible
to/from the Internet using Network Virtual Appliances (with filtering and traffic inspection), and custom routing
policies and network filtering (User Defined Routing and Network Security Groups).
Enterprises often need to connect vDCs to on-premises data centers or other resources. The connectivity between
Azure and on-premises networks is therefore a crucial aspect when designing an effective architecture. Enterprises
have two different ways to create an interconnection between a vDC and on-premises networks: transit over the
Internet, or by private direct connections.
An Azure Site-to-Site VPN is an interconnection service over the Internet between on-premises networks and
the vDC, established through secure encrypted connections (IPsec/IKE tunnels). Azure Site-to-Site connection is
flexible, quick to create, and does not require any further procurement, as all connections go over the Internet.
ExpressRoute is an Azure connectivity service that lets you create private connections between vDC and the on-
premises networks. ExpressRoute connections do not go over the public Internet, and offer higher security,
reliability, and higher speeds (up to 10 Gbps) along with consistent latency. ExpressRoute is very useful for vDCs,
as ExpressRoute customers can get the benefits of compliance rules associated with private connections.
Deploying ExpressRoute connections involves engaging with an ExpressRoute service provider. For customers that
need to start quickly, it is common to initially use a site-to-site VPN to establish connectivity between the vDC and
on-premises resources, and then migrate to an ExpressRoute connection.
Connectivity within the cloud
VNets and VNet peering are the basic networking connectivity services inside a vDC. A VNet guarantees a natural
boundary of isolation for vDC resources, and VNet peering allows intercommunication between different VNets
within the same Azure region or even across regions. Traffic control inside a VNet and between VNets needs to
match a set of security rules specified through access control lists (network security groups), network virtual
appliances, and custom routing tables (UDRs).
In Azure, every component, whatever the type, is deployed in an Azure Subscription. The isolation of Azure
components in different Azure subscriptions can satisfy the requirements of different LOBs, such as setting up
differentiated levels of access and authorization.
A single vDC can scale up to a large number of spokes although, as with every IT system, there are platform limits.
The hub deployment is bound to a specific Azure subscription, which has restrictions and limits (for example, a
maximum number of VNet peerings - see Azure subscription and service limits, quotas, and constraints for details). In cases
where limits may be an issue, the architecture can scale up further by extending the model from a single hub-
spokes to a cluster of hub and spokes. Multiple hubs in one or more Azure regions can be interconnected using
VNet Peering, ExpressRoute, or site-to-site VPN.
The introduction of multiple hubs increases the cost and management effort of the system and would only be
justified by scalability (examples: system limits or redundancy) and regional replication (examples: end-user
performance or disaster recovery). In scenarios requiring multiple hubs, all the hubs should strive to offer the same
set of services for operational ease.
Interconnection between spokes
Inside a single spoke, it is possible to implement complex multi-tier workloads. Multi-tier configurations can be
implemented using subnets (one for every tier) in the same VNet and filtering the flows using NSGs.
On the other hand, an architect may want to deploy a multi-tier workload across multiple VNets. Using VNet
peering, spokes can connect to other spokes in the same hub or different hubs. A typical example of this scenario is
the case where application processing servers are in one spoke (VNet), while the database is deployed in a different
spoke (VNet). In this case, it is easy to interconnect the spokes with VNet peering and thereby avoid transiting
through the hub. A careful architecture and security review should be performed to ensure that bypassing the hub
doesn’t bypass important security or auditing points that may only exist in the hub.
Spokes can also be interconnected to a spoke that acts as a hub. This approach creates a two-level hierarchy: the
spoke in the higher level (level 0) becomes the hub of the lower spokes (level 1) of the hierarchy. The spokes of a vDC
need to forward the traffic to the central hub to reach either the on-premises network or the Internet. An
architecture with two levels of hubs introduces complex routing that removes the benefits of a simple hub-spoke
relationship.
Although Azure allows complex topologies, one of the core principles of the vDC concept is repeatability and
simplicity. To minimize management effort, the simple hub-spoke design is the recommended vDC reference
architecture.
Components
A Virtual Data Center is made up of four basic component types: Infrastructure, Perimeter Networks,
Workloads, and Monitoring.
Each component type consists of various Azure features and resources. Your vDC is made up of instances of
multiple component types and multiple variations of the same component type. For instance, you may have many
different, logically separated, workload instances that represent different applications. You use these different
component types and instances to ultimately build the vDC.
The preceding high-level architecture of a vDC shows different component types used in different zones of the
hub-spokes topology. The diagram shows infrastructure components in various parts of the architecture.
As a good practice (for an on-premises DC or vDC), access rights and privileges should be group-based. Dealing
with groups instead of individual users helps maintain access policies consistently across teams and aids in
minimizing configuration errors. Assigning and removing users to and from appropriate groups helps keep the
privileges of a specific user up-to-date.
Each role group should have a unique prefix in its name, making it easy to identify which group is associated
with which workload. For instance, a workload hosting an authentication service might have groups named
AuthServiceNetOps, AuthServiceSecOps, AuthServiceDevOps, and AuthServiceInfraOps. Likewise, centralized
roles, or roles not related to a specific service, could be prefixed with "Corp"; CorpNetOps, for example.
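The naming convention above can be generated mechanically, which helps keep group names consistent when new workloads are onboarded. A small sketch using the role suffixes mentioned in this article:

```python
def role_group_names(workload, roles=("NetOps", "SecOps", "DevOps", "InfraOps")):
    """Build role-group names by prefixing each role with the workload name,
    following the convention described above (AuthServiceNetOps, CorpNetOps, ...)."""
    return [f"{workload}{role}" for role in roles]

print(role_group_names("AuthService"))
print(role_group_names("Corp", roles=("NetOps", "SecOps")))
```

Generating names this way makes it trivial to audit that every workload has the full set of expected role groups.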
Many organizations use a variation of the following groups to provide a major breakdown of roles:
The central IT group (Corp) has the ownership rights to control infrastructure components (such as networking and
security), and therefore needs to have the role of contributor on the subscription (and have control of the
hub) and network contributor rights in the spokes. Large organizations frequently split up these management
responsibilities between multiple teams, such as a Network Operations (CorpNetOps) group (with exclusive
focus on networking) and a Security Operations (CorpSecOps) group (responsible for firewall and security
policy). In this specific case, two different groups need to be created for assignment of these custom roles.
The dev & test (AppDevOps) group has the responsibility to deploy workloads (apps or services). This group
takes the role of Virtual Machine Contributor for IaaS deployments and/or one or more PaaS contributor
roles (see Built-in roles for Azure Role-Based Access Control). Optionally, the dev & test team may need to have
visibility into security policies (NSGs) and routing policies (UDRs) inside the hub or a specific spoke. Therefore, in
addition to the roles of contributor for workloads, this group would also need the role of Network Reader.
The operation and maintenance group (CorpInfraOps or AppInfraOps) has the responsibility of managing
workloads in production. This group needs to be a subscription contributor on workloads in any production
subscriptions. Some organizations might also evaluate if they need an additional escalation support team group
with the role of subscription contributor in production and in the central hub subscription, in order to fix
potential configuration issues in the production environment.
A vDC is structured so that the groups created for the central IT teams managing the hub have corresponding groups
at the workload level. In addition to managing hub resources, only the central IT groups would be able to control
external access and top-level permissions on the subscription. However, workload groups would be able to control
resources and permissions of their VNet independently of central IT.
The vDC needs to be partitioned to securely host multiple projects across different Line-of-Businesses (LOBs). All
projects require different isolated environments (Dev, UAT, production). Separate Azure subscriptions for each of
these environments provide natural isolation.
The preceding diagram shows the relationship between an organization's projects, users, groups, and the
environments where the Azure components are deployed.
Typically in IT, an environment (or tier) is a system in which multiple applications are deployed and executed. Large
enterprises use a development environment (where changes are originally made and tested) and a production
environment (what end-users use). Those environments are separated, often with several staging environments in
between them to allow phased deployment (rollout), testing, and rollback in case of problems. Deployment
architectures vary significantly, but usually the basic process of starting at development (DEV) and ending at
production (PROD) is still followed.
A common architecture for these types of multi-tier environments consists of DevOps (development and testing),
UAT (staging), and production environments. Organizations can leverage one or more Azure AD tenants to
define access and rights to these environments. The previous diagram shows a case where two different Azure AD
tenants are used: one for DevOps and UAT, and the other exclusively for production.
The presence of different Azure AD tenants enforces the separation between environments. The same group of
users (for example, central IT) needs to authenticate with a different URI to access a different Azure AD tenant in
order to modify the roles or permissions of either the DevOps or production environment of a project. Requiring
different user authentication to access different environments reduces possible outages and other issues caused by
human error.
Component Type: Infrastructure
This component type is where most of the supporting infrastructure resides. It's also where your centralized IT,
Security, and/or Compliance teams spend most of their time.
Infrastructure components provide an interconnection between the different components of a vDC, and are present
in both the hub and the spokes. The responsibility for managing and maintaining the infrastructure components is
typically assigned to the central IT and/or security team.
One of the primary tasks of the IT infrastructure team is to guarantee the consistency of IP address schemas across
the enterprise. The private IP address space assigned to the vDC needs to be consistent with, and must not overlap,
the private IP addresses assigned on your on-premises networks.
While NAT on the on-premises edge routers or in Azure environments can avoid IP address conflicts, it adds
complications to your infrastructure components. Simplicity of management is one of the key goals of vDC, so
using NAT to handle IP concerns is not a recommended solution.
Infrastructure components contain the following functionality:
Identity and directory services. Access to every resource type in Azure is controlled by an identity stored in a
directory service. The directory service stores not only the list of users, but also the access rights to resources in
a specific Azure subscription. These services can exist as cloud-only, or they can be synchronized with on-premises
identities stored in Active Directory.
Virtual Network. Virtual networks are one of the main components of a vDC, and enable you to create a traffic
isolation boundary on the Azure platform. A virtual network is composed of one or more virtual network
segments, each with a specific IP network prefix (a subnet). The virtual network defines an internal
perimeter area where IaaS virtual machines and PaaS services can establish private communications. VMs (and
PaaS services) in one virtual network cannot communicate directly to VMs (and PaaS services) in a different
virtual network, even if both virtual networks are created by the same customer, under the same subscription.
Isolation is a critical property that ensures customer VMs and communication remains private within a virtual
network.
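As an illustration, a hub VNet with separate subnets for shared services and perimeter appliances might be created with the Azure CLI. All names, locations, and address ranges below are hypothetical:

```shell
# Hypothetical resource group and region for the vDC hub.
rg=vdc-hub-rg
location=westeurope

# Create the hub VNet with an address space large enough for several subnets.
az network vnet create \
  --resource-group $rg \
  --name hub-vnet \
  --location $location \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name shared-services \
  --subnet-prefixes 10.0.1.0/24

# Add a second subnet for perimeter (DMZ) appliances.
az network vnet subnet create \
  --resource-group $rg \
  --vnet-name hub-vnet \
  --name perimeter \
  --address-prefixes 10.0.2.0/24
```

VMs and PaaS services deployed into these subnets can then communicate privately without traversing the internet.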
UDR. Traffic in a virtual network is routed by default based on the system routing table. A user-defined route (UDR)
is a custom routing table that network administrators can associate with one or more subnets to override the
behavior of the system routing table and define a communication path within a virtual network. The presence of
UDRs guarantees that egress traffic from the spoke transits through the specific custom VMs and/or network
virtual appliances and load balancers present in the hub and in the spokes.
NSG. A network security group is a list of security rules that filter traffic based on source IP, destination IP,
protocol, source port, and destination port. An NSG can be applied to a subnet, to a virtual NIC associated with an
Azure VM, or to both. NSGs are essential to implement correct flow control in the hub and in the spokes. The level
of security afforded by an NSG is a function of which ports you open, and for what purpose. Customers should
apply additional per-VM filters with host-based firewalls such as iptables or Windows Firewall.
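As a sketch of subnet-level filtering, the following Azure CLI commands (all names hypothetical) create an NSG that only admits HTTPS from the internet and attach it to a spoke subnet:

```shell
rg=vdc-spoke-rg

# Create the NSG; its default rules deny all other inbound traffic from the internet.
az network nsg create --resource-group $rg --name web-nsg

# Allow inbound HTTPS from the Internet service tag only.
az network nsg rule create \
  --resource-group $rg \
  --nsg-name web-nsg \
  --name allow-https \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-port-ranges 443

# Apply the NSG at the subnet level.
az network vnet subnet update \
  --resource-group $rg \
  --vnet-name spoke-vnet \
  --name web-tier \
  --network-security-group web-nsg
```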
DNS. The name resolution of resources in the VNets of a vDC is provided through DNS. Azure provides DNS
services for both public and private name resolution. Private zones provide name resolution both within a
virtual network and across virtual networks, and can span not only virtual networks in the same region, but also
regions and subscriptions. For public resolution, Azure DNS provides a hosting
service for DNS domains, providing name resolution using Microsoft Azure infrastructure. By hosting your
domains in Azure, you can manage your DNS records using the same credentials, APIs, tools, and billing as
your other Azure services.
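For private resolution, a private zone can be created and linked to a VNet; the sketch below uses the `az network private-dns` command group (available in recent Azure CLI versions) with hypothetical names:

```shell
rg=vdc-hub-rg

# Create a private zone; VMs in linked VNets can resolve names under corp.internal.
az network private-dns zone create \
  --resource-group $rg \
  --name corp.internal

# Link the zone to the hub VNet; registration-enabled auto-creates records for VMs.
az network private-dns link vnet create \
  --resource-group $rg \
  --zone-name corp.internal \
  --name hub-link \
  --virtual-network hub-vnet \
  --registration-enabled true
```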
Subscription and Resource Group Management. A subscription defines a natural boundary to create
multiple groups of resources in Azure. Resources in a subscription are assembled together in logical containers
named Resource Groups. The Resource Group represents a logical group to organize the resources of a vDC.
RBAC. Through role-based access control (RBAC), you can map organizational roles to rights on specific Azure
resources, restricting users to only a certain subset of actions. With RBAC, you can grant access by
assigning the appropriate role to users, groups, and applications within the relevant scope. The scope of a role
assignment can be an Azure subscription, a resource group, or a single resource. RBAC allows inheritance of
permissions. A role assigned at a parent scope also grants access to the children contained within it. Using
RBAC, you can segregate duties and grant only the amount of access to users that they need to perform their
jobs. For example, use RBAC to let one employee manage virtual machines in a subscription, while another can
manage SQL DBs within the same subscription.
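The VM/SQL example above might look like the following with the Azure CLI (assignees and resource group are hypothetical):

```shell
rg=spoke1-rg

# One employee can manage only virtual machines in this resource group...
az role assignment create \
  --assignee vm-admin@contoso.com \
  --role "Virtual Machine Contributor" \
  --resource-group $rg

# ...while another can manage only SQL databases in the same resource group.
az role assignment create \
  --assignee dba@contoso.com \
  --role "SQL DB Contributor" \
  --resource-group $rg
```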
VNet Peering. The fundamental feature used to create the infrastructure of a vDC is VNet Peering, a
mechanism that connects two virtual networks (VNets) in the same region through the Azure data center
network, or using the Azure world-wide backbone across regions.
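A hub-spoke peering must be created in both directions; a minimal sketch with hypothetical names follows (recent Azure CLI versions accept a VNet name or resource ID for `--remote-vnet`; older versions use `--remote-vnet-id`):

```shell
rg=vdc-rg

# Hub side of the peering; allow-forwarded-traffic lets NVA-forwarded packets through.
az network vnet peering create \
  --resource-group $rg \
  --name hub-to-spoke1 \
  --vnet-name hub-vnet \
  --remote-vnet spoke1-vnet \
  --allow-vnet-access \
  --allow-forwarded-traffic

# Spoke side; --use-remote-gateways applies only if the hub hosts a VPN/ExpressRoute gateway.
az network vnet peering create \
  --resource-group $rg \
  --name spoke1-to-hub \
  --vnet-name spoke1-vnet \
  --remote-vnet hub-vnet \
  --allow-vnet-access \
  --allow-forwarded-traffic \
  --use-remote-gateways
```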
Component Type: Perimeter Networks
Perimeter network components (also known as a DMZ network) allow you to provide network connectivity with
your on-premises or physical data center networks, along with any connectivity to and from the Internet. It's also
where your network and security teams likely spend most of their time.
Incoming packets should flow through the security appliances in the hub, such as the firewall, IDS, and IPS, before
reaching the back-end servers in the spokes. Internet-bound packets from the workloads should also flow through
the security appliances in the perimeter network for policy enforcement, inspection, and auditing purposes, before
leaving the network.
Perimeter network components provide the following features:
Virtual Networks, UDR, NSG
Network Virtual Appliance
Load Balancer
Application Gateway / WAF
Public IPs
Usually, the central IT and security teams have responsibility for requirement definition and operations of the
perimeter networks.
The preceding diagram shows the enforcement of two perimeters with access to the internet and an on-premises
network, both resident in the hub. In a single hub, the perimeter network to internet can scale up to support large
numbers of LOBs, using multiple farms of Web Application Firewalls (WAFs) and/or firewalls.
Virtual Networks The hub is typically built on a VNet with multiple subnets that host the different types of
services that filter and inspect traffic to or from the internet via NVAs, WAFs, and Azure Application Gateways.
UDR Using UDR, customers can deploy firewalls, IDS/IPS, and other virtual appliances, and route network traffic
through these security appliances for security boundary policy enforcement, auditing, and inspection. UDRs can be
created in both the hub and the spokes to guarantee that traffic transits through the specific custom VMs, network
virtual appliances, and load balancers used by the vDC. To guarantee that traffic generated from VMs resident in
the spoke transits to the correct virtual appliances, a UDR needs to be set in the subnets of the spoke by setting the
front-end IP address of the internal load balancer as the next-hop. The internal load balancer distributes the internal
traffic to the virtual appliances (load balancer back-end pool).
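The spoke-side UDR described above can be sketched as follows, where 10.0.3.100 is a hypothetical front-end IP of the hub's internal load balancer:

```shell
rg=vdc-rg
ilb_frontend=10.0.3.100  # hypothetical front-end IP of the internal load balancer in the hub

az network route-table create --resource-group $rg --name spoke1-routes

# Send all traffic leaving the spoke to the load balancer fronting the NVA pool.
az network route-table route create \
  --resource-group $rg \
  --route-table-name spoke1-routes \
  --name default-via-nva \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address $ilb_frontend

# Associate the route table with the spoke workload subnet.
az network vnet subnet update \
  --resource-group $rg \
  --vnet-name spoke1-vnet \
  --name workload \
  --route-table spoke1-routes
```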
Network Virtual Appliances In the hub, the perimeter network with access to the internet is normally managed
through a farm of firewalls and/or Web Application Firewalls (WAFs).
Different LOBs commonly use many web applications, and these applications tend to suffer from various
vulnerabilities and potential exploits. Web application firewalls are a special breed of product used to detect
attacks against web applications (HTTP/HTTPS) in more depth than a generic firewall. Compared with traditional
firewall technology, WAFs have a set of specific features to protect internal web servers from threats.
A firewall farm is a group of firewalls working in tandem under the same common administration, with a set of
security rules to protect the workloads hosted in the spokes, and control access to on-premises networks. A firewall
farm has less specialized software compared with a WAF, but has a broad application scope to filter and inspect any
type of traffic in egress and ingress. Firewall farms are normally implemented in Azure through Network Virtual
Appliances (NVAs), which are available in the Azure marketplace.
It is recommended to use one set of NVAs for traffic originating on the Internet, and another for traffic originating
on-premises. Using only one set of NVAs for both is a security risk, as it provides no security perimeter between
the two sets of network traffic. Using separate NVAs reduces the complexity of checking security rules, and makes
it clear which rules correspond to which incoming network request.
Most large enterprises manage multiple domains. Azure DNS can be used to host the DNS records for a particular
domain. For example, the virtual IP address (VIP) of the Azure external load balancer (or the WAFs) can be
registered in an A record of an Azure DNS zone.
Azure Load Balancer Azure Load Balancer offers a highly available Layer 4 (TCP, UDP) service, which can
distribute incoming traffic among service instances defined in a load-balanced set. Traffic sent to the load balancer
from front-end endpoints (public IP endpoints or private IP endpoints) can be redistributed, with or without address
translation, to a pool of back-end IP addresses (for example, network virtual appliances or VMs).
Azure Load Balancer can also probe the health of the various server instances; when a probe fails to respond, the
load balancer stops sending traffic to the unhealthy instance. In a vDC, an external load balancer is deployed in the
hub (for instance, to balance traffic to NVAs) and in the spokes (to perform tasks like balancing traffic between the
different VMs of a multi-tier application).
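A hub internal load balancer with a health probe in front of an NVA pool might be sketched like this (hypothetical names; the HA-ports rule with protocol `All` requires the Standard SKU):

```shell
rg=vdc-hub-rg

# Internal Standard load balancer with a fixed private front-end IP.
az network lb create \
  --resource-group $rg \
  --name nva-ilb \
  --sku Standard \
  --vnet-name hub-vnet \
  --subnet nva-subnet \
  --frontend-ip-name nva-frontend \
  --backend-pool-name nva-pool \
  --private-ip-address 10.0.3.100

# Probe the NVAs; instances that stop answering are taken out of rotation.
az network lb probe create \
  --resource-group $rg \
  --lb-name nva-ilb \
  --name ssh-probe \
  --protocol Tcp \
  --port 22

# HA-ports rule: forward all ports and protocols to the healthy NVAs.
az network lb rule create \
  --resource-group $rg \
  --lb-name nva-ilb \
  --name ha-ports \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name nva-frontend \
  --backend-pool-name nva-pool \
  --probe-name ssh-probe
```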
Application Gateway Microsoft Azure Application Gateway is a dedicated virtual appliance providing an
application delivery controller (ADC) as a service, offering various Layer 7 load-balancing capabilities for your
application. It allows you to optimize web-farm productivity by offloading CPU-intensive SSL termination to the
application gateway. It also provides other Layer 7 routing capabilities, including round-robin distribution of
incoming traffic, cookie-based session affinity, URL path-based routing, and the ability to host multiple websites
behind a single Application Gateway. A web application firewall (WAF) is also provided as part of the Application
Gateway WAF SKU, which protects web applications from common web vulnerabilities and exploits. Application
Gateway can be configured as an internet-facing gateway, an internal-only gateway, or a combination of both.
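An Application Gateway with the WAF SKU might be deployed roughly as follows (hypothetical names and back-end addresses; the v1 WAF SKUs are `WAF_Medium`/`WAF_Large`):

```shell
rg=vdc-hub-rg

# Application Gateway with the WAF SKU in a dedicated hub subnet.
az network application-gateway create \
  --resource-group $rg \
  --name vdc-appgw \
  --sku WAF_Medium \
  --capacity 2 \
  --vnet-name hub-vnet \
  --subnet appgw-subnet \
  --public-ip-address appgw-pip \
  --servers 10.1.1.4 10.1.1.5

# Turn the WAF on in Prevention mode with the OWASP 3.0 rule set.
az network application-gateway waf-config set \
  --resource-group $rg \
  --gateway-name vdc-appgw \
  --enabled true \
  --firewall-mode Prevention \
  --rule-set-type OWASP \
  --rule-set-version 3.0
```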
Public IPs Some Azure features enable you to associate service endpoints with a public IP address, allowing your
resource to be accessed from the internet. This endpoint uses network address translation (NAT) to route traffic to
the internal address and port on the Azure virtual network. This path is the primary way for external traffic to pass
into the virtual network. Public IP addresses can be configured to determine which traffic is passed in, and how and
where it's translated onto the virtual network.
Component Type: Monitoring
Monitoring components provide visibility and alerting from all the other component types. All teams should have
access to monitoring for the components and services they have access to. If you have a centralized help desk or
operations teams, they would need to have integrated access to the data provided by these components.
Azure offers different types of logging and monitoring services to track the behavior of Azure hosted resources.
Governance and control of workloads in Azure is based not just on collecting log data, but also the ability to trigger
actions based on specific reported events.
Azure Monitor - Azure includes multiple services that individually perform a specific role or task in the
monitoring space. Together, these services deliver a comprehensive solution for collecting, analyzing, and acting on
telemetry from your applications and the Azure resources that support them. They can also work to monitor critical
on-premises resources in order to provide a hybrid monitoring environment. Understanding the tools and data
that are available is the first step in developing a complete monitoring strategy for your application.
There are two major types of logs in Azure:
Activity logs (also referred to as operational logs) provide insight into the operations that were performed
on resources in the Azure subscription. These logs report the control-plane events for your subscriptions.
Every Azure resource produces audit logs.
Azure Diagnostic Logs are logs generated by a resource that provide rich, frequent data about the
operation of that resource. The content of these logs varies by resource type.
In a vDC, it is extremely important to track the NSG logs, particularly this information:
Event logs: provide information on what NSG rules are applied to VMs and instance roles based on MAC
address.
Counter logs: track how many times each NSG rule was executed to deny or allow traffic.
All logs can be stored in Azure Storage Accounts for audit, static analysis, or backup purposes. When the logs are
stored in an Azure storage account, customers can use different types of frameworks to retrieve, prep, analyze, and
visualize this data to report the status and health of cloud resources.
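As a sketch, NSG flow logging into a storage account can be turned on through Network Watcher (hypothetical names; Network Watcher must already be enabled in the region, and the command syntax varies with CLI version):

```shell
rg=vdc-rg
storage=vdclogs01   # hypothetical storage account that retains the logs

# Enable flow logging for the NSG, keeping 90 days of data.
az network watcher flow-log configure \
  --resource-group $rg \
  --nsg web-nsg \
  --enabled true \
  --storage-account $storage \
  --retention 90
```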
Large enterprises have likely already acquired a standard framework for monitoring on-premises systems, and can
extend that framework to integrate logs generated by cloud deployments. For organizations that want to keep all of
their logging in the cloud, Log Analytics is a great choice. Since Log Analytics is implemented as a cloud-based
service, you can have it up and running quickly with minimal investment in infrastructure services.
Log Analytics can also integrate with System Center components such as System Center Operations Manager to
extend your existing management investments into the cloud.
Log Analytics is a service in Azure that helps collect, correlate, search, and act on log and performance data
generated by operating systems, applications, and infrastructure cloud components. It gives customers real-time
operational insights using integrated search and custom dashboards to analyze all the records across all your
workloads in a vDC.
The Network Performance Monitor (NPM) solution inside OMS can provide detailed end-to-end network
information, including a single view of your Azure networks and on-premises networks, with specific monitors for
ExpressRoute and public services.
Component Type: Workloads
Workload components are where your actual applications and services reside. It's also where your application
development teams spend most of their time.
The workload possibilities are truly endless. The following are just a few of the possible workload types:
Internal LOB Applications
Line-of-business applications are computer applications critical to the ongoing operation of an enterprise. LOB
applications have some common characteristics:
Interactive. LOB applications are interactive by nature: data is entered, and result/reports are returned.
Data driven. LOB applications are data intensive with frequent access to the databases or other storage.
Integrated. LOB applications offer integration with other systems within or outside the organization.
Customer-facing web sites (internet- or internal-facing) Most applications that interact with the internet are
web sites. Azure offers the capability to run a web site on an IaaS VM or from an Azure Web Apps site (PaaS).
Azure Web Apps supports integration with VNets, which allows the deployment of Web Apps in the spoke of a
vDC. For internal-facing web sites, with VNet integration you don't need to expose an internet endpoint for your
applications, and can instead reach the resources via private, non-internet-routable addresses from your private
VNet.
Big Data/Analytics When data needs to scale up to very large volumes, relational databases may not scale
properly. Hadoop technology offers a system to run distributed queries in parallel on a large number of nodes.
Customers have the option to run data workloads in IaaS VMs or PaaS (HDInsight). HDInsight supports deploying
into a location-based VNet, and can be deployed to a cluster in a spoke of the vDC.
Events and Messaging Azure Event Hubs is a hyper-scale telemetry ingestion service that collects, transforms,
and stores millions of events. As a distributed streaming platform, it offers low latency and configurable time
retention, enabling you to ingest massive amounts of telemetry into Azure and read that data from multiple
applications. With Event Hubs, a single stream can support both real-time and batch-based pipelines.
Azure Service Bus can provide highly reliable cloud messaging between applications and services, offering
asynchronous brokered messaging between client and server, along with structured first-in-first-out (FIFO)
messaging and publish/subscribe capabilities.
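As an illustrative sketch, an Event Hubs namespace and hub for workload telemetry might be provisioned like this (hypothetical names; `--message-retention` is in days, and flag names vary with CLI version):

```shell
rg=vdc-spoke-rg

az eventhubs namespace create \
  --resource-group $rg \
  --name vdc-telemetry-ns \
  --location westeurope \
  --sku Standard

# Partitions set the degree of read parallelism for downstream consumers.
az eventhubs eventhub create \
  --resource-group $rg \
  --namespace-name vdc-telemetry-ns \
  --name device-events \
  --partition-count 4 \
  --message-retention 1
```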
Multiple vDC
So far, this article has focused on a single vDC, describing the basic components and architecture that contribute to
a resilient vDC. Azure features such as Azure Load Balancer, NVAs, availability sets, scale sets, and other
mechanisms contribute to a system that allows you to build solid SLA levels into your production services.
However, a single vDC is hosted within a single region, and is vulnerable to a major outage affecting that
entire region. Customers that want to achieve high SLAs need to protect their services through deployments of the
same project in two (or more) vDCs, placed in different regions.
In addition to SL A concerns, there are several common scenarios where deploying multiple vDCs makes sense:
Regional/Global presence
Disaster Recovery
Mechanism to divert traffic between DCs
Regional/Global presence
Azure data centers are present in numerous regions worldwide. When selecting multiple Azure data centers,
customers need to consider two related factors: geographical distances and latency. Customers need to evaluate
the geographical distance between the vDCs and the distance between the vDC and the end users, to offer the best
user experience.
The Azure region where vDCs are hosted also needs to conform with regulatory requirements established by any
legal jurisdiction under which your organization operates.
Disaster Recovery
The implementation of a disaster recovery plan is strongly related to the type of workload concerned, and the
ability to synchronize the workload state between different vDCs. Ideally, most customers want to synchronize
application data between deployments running in two different vDCs to implement a fast fail-over mechanism.
Most applications are sensitive to latency, and that can cause potential timeout and delay in data synchronization.
Synchronization or heartbeat monitoring of applications in different vDCs requires communication between them.
Two vDCs in different regions can be connected through:
VNet Peering - VNet Peering can connect hubs across regions
ExpressRoute private peering when the vDC hubs are connected to the same ExpressRoute circuit
multiple ExpressRoute circuits connected via your corporate backbone and your vDC mesh connected to the
ExpressRoute circuits
Site-to-Site VPN connections between your vDC hubs in each Azure Region
Usually, VNet peering or ExpressRoute connections are the preferred mechanism, due to the higher bandwidth and
consistent latency when transiting through the Microsoft backbone.
There is no magic recipe for validating an application distributed between two (or more) vDCs located in
different regions. Customers should run network qualification tests to verify the latency and bandwidth of the
connections, determine whether synchronous or asynchronous data replication is appropriate, and establish the
optimal recovery time objective (RTO) for their workloads.
Mechanism to divert traffic between DCs
One effective technique to divert incoming traffic from one DC to another is based on DNS. Azure Traffic
Manager uses the Domain Name System (DNS) to direct end-user traffic to the most appropriate public
endpoint in a specific vDC. Through probes, Traffic Manager periodically checks the service health of public
endpoints in different vDCs and, if those endpoints fail, routes automatically to the secondary vDC.
Traffic Manager works on Azure public endpoints and can be used, for example, to control/divert traffic to Azure
VMs and Web Apps in the appropriate vDC. Traffic Manager is resilient even in the face of an entire Azure region
failing and can control the distribution of user traffic for service endpoints in different vDCs based on several
criteria (for instance, failure of a service in a specific vDC, or selecting the vDC with the lowest network latency for
the client).
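A priority-routed failover between two vDCs might be sketched as follows (hypothetical names and targets): all traffic goes to the primary endpoint until its health probe fails, at which point Traffic Manager's DNS answers switch to the secondary.

```shell
rg=global-rg

# DNS-based failover profile; the probe checks /health over HTTPS.
az network traffic-manager profile create \
  --resource-group $rg \
  --name vdc-failover \
  --routing-method Priority \
  --unique-dns-name contoso-vdc \
  --protocol HTTPS \
  --port 443 \
  --path /health

# Primary vDC endpoint (lowest priority value wins while healthy).
az network traffic-manager endpoint create \
  --resource-group $rg \
  --profile-name vdc-failover \
  --name primary-vdc \
  --type externalEndpoints \
  --target primary.contoso.com \
  --priority 1

# Secondary vDC endpoint used only when the primary probe fails.
az network traffic-manager endpoint create \
  --resource-group $rg \
  --profile-name vdc-failover \
  --name secondary-vdc \
  --type externalEndpoints \
  --target secondary.contoso.com \
  --priority 2
```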
Conclusion
The Virtual Data Center is an approach to data center migration into the cloud that uses a combination of features
and capabilities to create a scalable architecture in Azure that maximizes cloud resource use, reduces costs, and
simplifies system governance. The vDC concept is based on a hub-and-spoke topology, providing common shared
services in the hub and allowing specific applications/workloads in the spokes. A vDC matches the structure of
company roles, where different departments (central IT, DevOps, operations and maintenance) work together, each
with a specific list of roles and duties. A vDC satisfies the requirements for a "lift and shift" migration, but also
provides many advantages to cloud-native deployments.
Next Steps
Explore VNet Peering, the underpinning technology for vDC hub and spoke designs
Implement AAD to get started with RBAC exploration
Develop a subscription and resource management model and an RBAC model to meet the structure,
requirements, and policies of your organization. The most important activity is planning. As much as practical,
plan for reorganizations, mergers, new product lines, and so on.
Asymmetric routing with multiple network paths
6/27/2017 • 6 minutes to read
This article explains how forward and return network traffic might take different routes when multiple paths are
available between network source and destination.
It's important to understand two concepts to understand asymmetric routing. One is the effect of multiple network
paths. The other is how devices, like a firewall, keep state. These types of devices are called stateful devices. A
combination of these two factors creates scenarios in which network traffic is dropped by a stateful device, because
the stateful device never saw the forward traffic of the flow pass through it.
Although it primarily occurs on the Internet, asymmetric routing also applies to other combinations of multiple
paths. It applies, for example, both to an Internet path and a private path that go to the same destination, and to
multiple private paths that go to the same destination.
Each router along the way, from source to destination, computes the best path to reach a destination. The router's
determination of best possible path is based on two main factors:
Routing between external networks is based on a routing protocol, Border Gateway Protocol (BGP ). BGP takes
advertisements from neighbors and runs them through a series of steps to determine the best path to the
intended destination. It stores the best path in its routing table.
The length of a subnet mask associated with a route influences routing paths. If a router receives multiple
advertisements for the same IP address but with different subnet masks, the router prefers the advertisement
with the longer subnet mask because it's considered a more specific route. For example, a route for 10.1.1.0/24 is
preferred over a route for 10.1.0.0/16 when forwarding a packet destined to 10.1.1.5.
Stateful devices
Routers look at the IP header of a packet for routing purposes. Some devices look even deeper inside the packet.
Typically, these devices look at Layer 4 headers (Transmission Control Protocol, or TCP; or User Datagram Protocol,
or UDP), or even Layer 7 (application layer) headers. These kinds of devices are either security devices or
bandwidth-optimization devices.
A firewall is a common example of a stateful device. A firewall allows or denies a packet to pass through its
interfaces based on various fields such as protocol, TCP/UDP port, and URL headers. This level of packet
inspection puts a heavy processing load on the device. To improve performance, the firewall inspects the first
packet of a flow. If it allows the packet to proceed, it keeps the flow information in its state table. All subsequent
packets related to this flow are allowed based on the initial determination. However, if a packet arrives at the
firewall for a flow about which the firewall has no prior state information, the firewall drops the packet.
Now suppose that you turn on ExpressRoute and consume services offered by Microsoft over ExpressRoute. All other services
from Microsoft are consumed over the Internet. You deploy a separate firewall at your edge that is connected to
ExpressRoute. Microsoft advertises more specific prefixes to your network over ExpressRoute for specific services.
Your routing infrastructure chooses ExpressRoute as the preferred path for those prefixes. If you are not
advertising your public IP addresses to Microsoft over ExpressRoute, Microsoft communicates with your public IP
addresses via the Internet. Forward traffic from your network to Microsoft uses ExpressRoute, and reverse traffic
from Microsoft uses the Internet. When the firewall at the edge sees a response packet for a flow that it does not
find in the state table, it drops the return traffic.
If you choose to use the same network address translation (NAT) pool for ExpressRoute and for the Internet, you'll
see similar issues with the clients in your network on private IP addresses. Requests for services like Windows
Update go via the Internet because IP addresses for these services are not advertised via ExpressRoute. However,
the return traffic comes back via ExpressRoute. If Microsoft receives an IP address with the same subnet mask from
the Internet and ExpressRoute, it prefers ExpressRoute over the Internet. If a firewall or another stateful device that
is on your network edge and facing ExpressRoute has no prior information about the flow, it drops the packets that
belong to that flow.
Microsoft cloud services deliver hyper-scale services and infrastructure, enterprise-grade capabilities, and many
choices for hybrid connectivity. Customers can choose to access these services either via the Internet or with Azure
ExpressRoute, which provides private network connectivity. The Microsoft Azure platform allows customers to
seamlessly extend their infrastructure into the cloud and build multi-tier architectures. Additionally, third parties
can enable enhanced capabilities by offering security services and virtual appliances. This white paper provides an
overview of security and architectural issues that customers should consider when using Microsoft cloud services
accessed via ExpressRoute. It also covers creating more secure services in Azure virtual networks.
Fast start
The following logic chart can direct you to a specific example of the many security techniques available with the
Azure platform. For quick reference, find the example that best fits your case. For expanded explanations, continue
reading through the paper.
Example 1: Build a perimeter network (also known as DMZ, demilitarized zone, or screened subnet) to help protect
applications with network security groups (NSGs).
Example 2: Build a perimeter network to help protect applications with a firewall and NSGs.
Example 3: Build a perimeter network to help protect networks with a firewall, user-defined route (UDR), and NSG.
Example 4: Add a hybrid connection with a site-to-site, virtual appliance virtual private network (VPN).
Example 5: Add a hybrid connection with a site-to-site Azure VPN gateway.
Example 6: Add a hybrid connection with ExpressRoute.
Examples for adding connections between virtual networks, high availability, and service chaining will be added to
this document over the next few months.
This approach provides a more secure foundation for customers to deploy their services in the Microsoft cloud.
The next step is for customers to design and create a security architecture to protect these services.
Inbound from the Internet, Azure DDoS helps protect against large-scale attacks against Azure. The next layer is
customer-defined public IP addresses (endpoints), which are used to determine which traffic can pass through the
cloud service to the virtual network. Native Azure virtual network isolation ensures complete isolation from all
other networks, and that traffic only flows through user-configured paths and methods. These paths and methods
are the next layer, where NSGs, UDR, and network virtual appliances can be used to create security boundaries to
protect the application deployments in the protected network.
The next section provides an overview of Azure virtual networks. These virtual networks are created by customers,
and are what their deployed workloads are connected to. Virtual networks are the basis of all the network security
features required to establish a perimeter network to protect customer deployments in Azure.
NOTE
Traffic isolation refers only to traffic inbound to the virtual network. By default, outbound traffic from the VNet to the internet
is allowed, but it can be blocked with NSGs if desired.
Multi-tier topology: Virtual networks allow customers to define multi-tier topology by allocating subnets and
designating separate address spaces for different elements or “tiers” of their workloads. These logical groupings
and topologies enable customers to define different access policies based on the workload types, and also control
traffic flows between the tiers.
Cross-premises connectivity: Customers can establish cross-premises connectivity between a virtual network
and multiple on-premises sites or other virtual networks in Azure. To construct a connection, customers can use
VNet Peering, Azure VPN Gateways, third-party network virtual appliances, or ExpressRoute. Azure supports
site-to-site (S2S) VPNs using standard IPsec/IKE protocols and ExpressRoute private connectivity.
NSG allows customers to create rules (ACLs) at the desired level of granularity: network interfaces, individual
VMs, or virtual subnets. Customers can control access by permitting or denying communication between the
workloads within a virtual network, from systems on customer’s networks via cross-premises connectivity, or
direct Internet communication.
UDR and IP Forwarding allow customers to define the communication paths between different tiers within a
virtual network. Customers can deploy a firewall, IDS/IPS, and other virtual appliances, and route network
traffic through these security appliances for security boundary policy enforcement, auditing, and inspection.
Network virtual appliances in the Azure Marketplace: Security appliances such as firewalls, load balancers,
and IDS/IPS are available in the Azure Marketplace and the VM Image Gallery. Customers can deploy these
appliances into their virtual networks, and specifically, at their security boundaries (including the perimeter
network subnets) to complete a multi-tiered secure network environment.
With these features and capabilities, one example of how a perimeter network architecture could be constructed in
Azure is the following diagram:
TIP
Keep the following two groups separate: the individuals authorized to access the perimeter network security gear and the
individuals authorized as application development, deployment, or operations administrators. Keeping these groups separate
allows for a segregation of duties and prevents a single person from bypassing both application security and network
security controls.
TIP
Use the smallest number of boundaries that satisfy the security requirements for a given situation. With more boundaries,
operations and troubleshooting can be more difficult, as well as the management overhead involved with managing the
multiple boundary policies over time. However, insufficient boundaries increase risk. Finding the balance is critical.
The preceding figure shows a high-level view of a network with three security boundaries. The boundaries are between
the perimeter network and the Internet, the Azure front-end and back-end private subnets, and the Azure back-end
subnet and the on-premises corporate network.
2) Where are the boundaries located?
Once the number of boundaries is decided, where to implement them is the next decision point. There are
generally three choices:
Using an Internet-based intermediary service (for example, a cloud-based Web application firewall, which is not
discussed in this document)
Using native features and/or network virtual appliances in Azure
Using physical devices on the on-premises network
On purely Azure networks, the options are native Azure features (for example, Azure Load Balancers) or network
virtual appliances from the rich partner ecosystem of Azure (for example, Check Point firewalls).
If a boundary is needed between Azure and an on-premises network, the security devices can reside on either side
of the connection (or both sides). Thus a decision must be made on the location to place security gear.
In the previous figure, the Internet-to-perimeter network and the front-to-back-end boundaries are entirely
contained within Azure, and must be either native Azure features or network virtual appliances. Security devices on
the boundary between Azure (back-end subnet) and the corporate network could be either on the Azure side or the
on-premises side, or even a combination of devices on both sides. There can be significant advantages and
disadvantages to either option that must be seriously considered.
For example, using existing physical security gear on the on-premises network side has the advantage that no new
gear is needed. It just needs reconfiguration. The disadvantage, however, is that all traffic must come back from
Azure to the on-premises network to be seen by the security gear. Thus Azure-to-Azure traffic could incur
significant latency, and affect application performance and user experience, if it was forced back to the on-premises
network for security policy enforcement.
3) How are the boundaries implemented?
Each security boundary will likely have different capability requirements (for example, IDS and firewall rules on the
Internet side of the perimeter network, but only ACLs between the perimeter network and back-end subnet).
Deciding on which device (or how many devices) to use depends on the scenario and security requirements. In the
following section, examples 1, 2, and 3 discuss some options that could be used. Reviewing the Azure native
network features and the devices available in Azure from the partner ecosystem shows the myriad options
available to solve virtually any scenario.
Another key implementation decision point is how to connect the on-premises network with Azure. Should you
use the Azure virtual gateway or a network virtual appliance? These options are discussed in greater detail in the
following section (examples 4, 5, and 6).
Additionally, traffic between virtual networks within Azure may be needed. These scenarios will be added in the
future.
Once you know the answers to the previous questions, the Fast Start section can help identify which examples are
most appropriate for a given scenario.
Environment description
In this example, there is a subscription that contains the following resources:
A single resource group
A virtual network with two subnets: “FrontEnd” and “BackEnd”
A Network Security Group that is applied to both subnets
A Windows server that represents an application web server (“IIS01”)
Two Windows servers that represent application back-end servers (“AppVM01”, “AppVM02”)
A Windows server that represents a DNS server (“DNS01”)
A public IP associated with the application web server
For scripts and an Azure Resource Manager template, see the detailed build instructions.
NSG description
In this example, an NSG is built and then loaded with six rules.
TIP
Generally speaking, you should create your specific “Allow” rules first, followed by the more generic “Deny” rules. The given
priority dictates which rules are evaluated first. Once traffic is found to apply to a specific rule, no further rules are evaluated.
NSG rules can apply in either the inbound or outbound direction (from the perspective of the subnet).
Declaratively, the following rules are being built for inbound traffic:
1. Internal DNS traffic (port 53) is allowed.
2. RDP traffic (port 3389) from the Internet to any Virtual Machine is allowed.
3. HTTP traffic (port 80) from the Internet to web server (IIS01) is allowed.
4. Any traffic (all ports) from IIS01 to AppVM01 is allowed.
5. Any traffic (all ports) from the Internet to the entire virtual network (both subnets) is denied.
6. Any traffic (all ports) from the front-end subnet to the back-end subnet is denied.
With these rules bound to each subnet, if an HTTP request was inbound from the Internet to the web server, both
rules 3 (allow) and 5 (deny) would apply. But because rule 3 has a higher priority, only it would apply, and rule 5
would not come into play. Thus the HTTP request would be allowed to the web server. If that same traffic was
trying to reach the DNS01 server, rule 5 (deny) would be the first to apply, and the traffic would not be allowed to
pass to the server. Rule 6 (deny) blocks the front-end subnet from talking to the back-end subnet (except for
allowed traffic in rules 1 and 4). This rule-set protects the back-end network in case an attacker compromises the
web application on the front end. The attacker would have limited access to the back-end “protected” network (only
to resources exposed on the AppVM01 server).
There is a default outbound rule that allows traffic out to the Internet. For this example, we’re allowing outbound
traffic and not modifying any outbound rules. To lock down traffic in both directions, user-defined routing is
required (see example 3).
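The first-match-by-priority behavior described above can be sketched in a few lines of Python. This is a simplified illustrative model, not how the Azure fabric implements NSGs; the priority numbers and the IP addresses (10.0.1.5 for IIS01, 10.0.2.4 for DNS01, 10.0.2.5 for AppVM01) are assumptions chosen for the sketch:

```python
import ipaddress

# Simplified NSG model: rules are evaluated in ascending priority order;
# the first rule whose filter matches decides the outcome, and evaluation stops.
RULES = [  # (priority, name, source, destination, port, action); port 0 = any port
    (100, "AllowDNS",     "10.0.0.0/16", "10.0.2.4/32", 53,   "Allow"),
    (110, "AllowRDP",     "0.0.0.0/0",   "10.0.0.0/16", 3389, "Allow"),
    (120, "AllowWebIn",   "0.0.0.0/0",   "10.0.1.5/32", 80,   "Allow"),
    (130, "AllowAppVM01", "10.0.1.5/32", "10.0.2.5/32", 0,    "Allow"),
    (140, "DenyInternet", "0.0.0.0/0",   "10.0.0.0/16", 0,    "Deny"),
    (150, "DenyFEtoBE",   "10.0.1.0/24", "10.0.2.0/24", 0,    "Deny"),
]

def evaluate(src: str, dst: str, port: int) -> str:
    """Return the action of the first matching rule (lowest priority number wins)."""
    for _prio, name, rsrc, rdst, rport, action in sorted(RULES):
        if (ipaddress.ip_address(src) in ipaddress.ip_network(rsrc)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(rdst)
                and rport in (0, port)):
            return f"{action} ({name})"
    return "Deny (default)"

# HTTP from the Internet to the web server: the specific allow rule matches
# before the broader deny rule, so the request is allowed.
print(evaluate("203.0.113.10", "10.0.1.5", 80))  # Allow (AllowWebIn)
# The same traffic aimed at DNS01 hits the deny rule first.
print(evaluate("203.0.113.10", "10.0.2.4", 80))  # Deny (DenyInternet)
```

Tracing a packet through the sorted rule list this way is a quick sanity check before committing a rule set to a subnet.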
Conclusion
This example is a relatively simple and straightforward way of isolating the back-end subnet from inbound traffic.
For more information, see the detailed build instructions. These instructions include:
How to build this perimeter network with classic PowerShell scripts.
How to build this perimeter network with an Azure Resource Manager template.
Detailed descriptions of each NSG command.
Detailed traffic flow scenarios, showing how traffic is allowed or denied in each layer.
Example 2 Build a perimeter network to help protect applications with a firewall and NSGs
Back to Fast start | Detailed build instructions for this example
Environment description
In this example, there is a subscription that contains the following resources:
A single resource group
A virtual network with two subnets: “FrontEnd” and “BackEnd”
A Network Security Group that is applied to both subnets
A network virtual appliance, in this case a firewall, connected to the front-end subnet
A Windows server that represents an application web server (“IIS01”)
Two Windows servers that represent application back-end servers (“AppVM01”, “AppVM02”)
A Windows server that represents a DNS server (“DNS01”)
For scripts and an Azure Resource Manager template, see the detailed build instructions.
NSG description
In this example, an NSG is built and then loaded with six rules.
TIP
Generally speaking, you should create your specific “Allow” rules first, followed by the more generic “Deny” rules. The given
priority dictates which rules are evaluated first. Once traffic is found to apply to a specific rule, no further rules are evaluated.
NSG rules can apply in either the inbound or outbound direction (from the perspective of the subnet).
Declaratively, the following rules are being built for inbound traffic:
1. Internal DNS traffic (port 53) is allowed.
2. RDP traffic (port 3389) from the Internet to any Virtual Machine is allowed.
3. Any Internet traffic (all ports) to the network virtual appliance (firewall) is allowed.
4. Any traffic (all ports) from IIS01 to AppVM01 is allowed.
5. Any traffic (all ports) from the Internet to the entire virtual network (both subnets) is denied.
6. Any traffic (all ports) from the front-end subnet to the back-end subnet is denied.
With these rules bound to each subnet, if an HTTP request was inbound from the Internet to the firewall, both rules
3 (allow) and 5 (deny) would apply. But because rule 3 has a higher priority, only it would apply, and rule 5 would
not come into play. Thus the HTTP request would be allowed to the firewall. If that same traffic was trying to reach
the IIS01 server, even though it’s on the front-end subnet, rule 5 (deny) would apply, and the traffic would not be
allowed to pass to the server. Rule 6 (deny) blocks the front-end subnet from talking to the back-end subnet (except
for allowed traffic in rules 1 and 4). This rule-set protects the back-end network in case an attacker compromises
the web application on the front end. The attacker would have limited access to the back-end “protected” network
(only to resources exposed on the AppVM01 server).
There is a default outbound rule that allows traffic out to the Internet. For this example, we’re allowing outbound
traffic and not modifying any outbound rules. To lock down traffic in both directions, user-defined routing is
required (see example 3).
Firewall rule description
On the firewall, forwarding rules should be created. Since this example only routes Internet traffic in-bound to the
firewall and then to the web server, only one forwarding network address translation (NAT) rule is needed.
The forwarding rule accepts any inbound source address that hits the firewall trying to reach HTTP (port 80, or 443
for HTTPS). The traffic is sent out of the firewall's local interface and redirected to the web server at the IP address
10.0.1.5.
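The destination-NAT behavior of that single forwarding rule can be modeled as follows. This is a conceptual sketch only; the firewall's local interface address (10.0.0.4 here) is an assumption, while 10.0.1.5 is the web server address from this example:

```python
def apply_dnat(dst_ip: str, dst_port: int,
               firewall_ip: str = "10.0.0.4",      # assumed firewall interface IP
               web_server_ip: str = "10.0.1.5"):   # web server IP from this example
    """Rewrite the destination of HTTP/HTTPS traffic that hits the firewall.

    Returns the (new_destination, port) pair if the forwarding rule matches,
    or None if no rule applies (the firewall handles the traffic itself or drops it).
    """
    if dst_ip == firewall_ip and dst_port in (80, 443):
        return (web_server_ip, dst_port)
    return None

print(apply_dnat("10.0.0.4", 80))   # ('10.0.1.5', 80)
print(apply_dnat("10.0.0.4", 22))   # None - no forwarding rule for this port
```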
Conclusion
This example is a relatively straightforward way of protecting your application with a firewall and isolating the
back-end subnet from inbound traffic. For more information, see the detailed build instructions. These instructions
include:
How to build this perimeter network with classic PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed descriptions of each NSG command and firewall rule.
Detailed traffic flow scenarios, showing how traffic is allowed or denied in each layer.
Example 3 Build a perimeter network to help protect networks with a firewall and UDR and NSG
Back to Fast start | Detailed build instructions for this example
Environment description
In this example, there is a subscription that contains the following resources:
A single resource group
A virtual network with three subnets: “SecNet”, “FrontEnd”, and “BackEnd”
A network virtual appliance, in this case a firewall, connected to the SecNet subnet
A Windows server that represents an application web server (“IIS01”)
Two Windows servers that represent application back-end servers (“AppVM01”, “AppVM02”)
A Windows server that represents a DNS server (“DNS01”)
For scripts and an Azure Resource Manager template, see the detailed build instructions.
UDR description
By default, the following system routes are defined:
Effective routes :
Address Prefix Next hop type Next hop IP address Status Source
-------------- ------------- ------------------- ------ ------
{10.0.0.0/16} VNETLocal Active Default
{0.0.0.0/0} Internet Active Default
{10.0.0.0/8} Null Active Default
{100.64.0.0/10} Null Active Default
{172.16.0.0/12} Null Active Default
{192.168.0.0/16} Null Active Default
The VNETLocal is always one or more defined address prefixes that make up the virtual network for that specific
network (that is, it changes from virtual network to virtual network, depending on how each specific virtual
network is defined). The remaining system routes are static and default as indicated in the table.
In this example, two routing tables are created, one each for the front-end and back-end subnets. Each table is
loaded with static routes appropriate for the given subnet. In this example, each table has three routes that direct all
non-local traffic (0.0.0.0/0) through the firewall (Next hop = virtual appliance IP address):
1. Local subnet traffic with no Next Hop defined to allow local subnet traffic to bypass the firewall.
2. Virtual network traffic with a Next Hop defined as firewall. This next hop overrides the default rule that allows
local virtual network traffic to route directly.
3. All remaining traffic (0/0) with a Next Hop defined as the firewall.
TIP
Not having the local subnet entry in the UDR breaks local subnet communications.
In our example, 10.0.1.0/24 pointing to VNETLocal is critical! Without it, packets leaving the web server (10.0.1.4) destined
for another local server (for example, 10.0.1.25) will fail because they are sent to the NVA. The NVA sends them back to the
subnet, and the subnet resends them to the NVA in an infinite loop.
The chances of a routing loop are typically higher on appliances with multiple NICs connected to separate
subnets, which is often the case with traditional, on-premises appliances.
Once the routing tables are created, they must be bound to their subnets. The front-end subnet routing table, once
created and bound to the subnet, would look like this output:
Effective routes :
Address Prefix Next hop type Next hop IP address Status Source
-------------- ------------- ------------------- ------ ------
{10.0.1.0/24} VNETLocal Active
{10.0.0.0/16} VirtualAppliance 10.0.0.4 Active
{0.0.0.0/0} VirtualAppliance 10.0.0.4 Active
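Route selection in Azure uses longest-prefix match: of all routes whose prefix contains the destination, the most specific one wins. A minimal Python model of the effective route table above illustrates why the local /24 entry keeps subnet-local traffic off the NVA (a sketch only; the NVA address 10.0.0.4 is taken from this example's output):

```python
import ipaddress

# Effective routes on the front-end subnet, from the output above:
ROUTES = [
    ("10.0.1.0/24", "VNETLocal",        None),        # local subnet bypasses the NVA
    ("10.0.0.0/16", "VirtualAppliance", "10.0.0.4"),  # rest of the VNet -> firewall
    ("0.0.0.0/0",   "VirtualAppliance", "10.0.0.4"),  # everything else -> firewall
]

def next_hop(dst: str):
    """Pick the route with the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [r for r in ROUTES if addr in ipaddress.ip_network(r[0])]
    _, hop_type, hop_ip = max(
        matches, key=lambda r: ipaddress.ip_network(r[0]).prefixlen)
    return hop_type, hop_ip

print(next_hop("10.0.1.25"))  # ('VNETLocal', None) - stays on the subnet
print(next_hop("10.0.2.4"))   # ('VirtualAppliance', '10.0.0.4') - via the firewall
print(next_hop("8.8.8.8"))    # ('VirtualAppliance', '10.0.0.4') - via the firewall
```

Remove the 10.0.1.0/24 entry from `ROUTES` and the first lookup falls through to the /16 route, which is exactly the routing-loop scenario the TIP above warns about.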
NOTE
UDR can now be applied to the gateway subnet to which the ExpressRoute circuit is connected.
Examples of how to enable your perimeter network with ExpressRoute or site-to-site networking are shown in examples 3
and 4.
IP Forwarding description
IP Forwarding is a companion feature to UDR. IP Forwarding is a setting on a virtual appliance that allows it to
receive traffic not specifically addressed to the appliance, and then forward that traffic to its ultimate destination.
For example, if AppVM01 makes a request to the DNS01 server, UDR would route this traffic to the firewall. With
IP Forwarding enabled, the traffic for the DNS01 destination (10.0.2.4) is accepted by the appliance (10.0.0.4) and
then forwarded to its ultimate destination (10.0.2.4). Without IP Forwarding enabled on the firewall, traffic would
not be accepted by the appliance even though the route table has the firewall as the next hop. To use a virtual
appliance, it’s critical to remember to enable IP Forwarding along with UDR.
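The acceptance rule that IP Forwarding changes can be stated as a one-line predicate (a conceptual sketch of the behavior, not the platform implementation):

```python
def nic_accepts(packet_dst: str, nic_ip: str, ip_forwarding: bool) -> bool:
    """A NIC always accepts traffic addressed to it; with IP Forwarding enabled
    it also accepts traffic destined elsewhere, so the appliance can forward it."""
    return packet_dst == nic_ip or ip_forwarding

# UDR sends AppVM01 -> DNS01 (10.0.2.4) traffic to the firewall NIC (10.0.0.4):
print(nic_accepts("10.0.2.4", "10.0.0.4", ip_forwarding=True))   # True - forwarded on
print(nic_accepts("10.0.2.4", "10.0.0.4", ip_forwarding=False))  # False - dropped
```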
NSG description
In this example, an NSG is built and then loaded with a single rule. The NSG is then bound only to the
front-end and back-end subnets (not the SecNet). Declaratively, the following rule is being built:
Any traffic (all ports) from the Internet to the entire virtual network (all subnets) is denied.
Although an NSG is used in this example, its main purpose is to serve as a secondary layer of defense against manual
misconfiguration. The goal is to block all inbound traffic from the Internet to either the front-end or back-end
subnets. Traffic should only flow through the SecNet subnet to the firewall (and then, if appropriate, on to the front-
end or back-end subnets). Plus, with the UDR rules in place, any traffic that did make it into the front-end or back-
end subnets would be directed out to the firewall (thanks to UDR). The firewall would see this traffic as an
asymmetric flow and would drop the outbound traffic. Thus there are three layers of security protecting the
subnets:
No Public IP addresses on any FrontEnd or BackEnd NICs.
NSGs denying traffic from the Internet.
The firewall dropping asymmetric traffic.
One interesting point regarding the NSG in this example is that it contains only one rule, which is to deny Internet
traffic to the entire virtual network, including the Security subnet. However, since the NSG is only bound to the
front-end and back-end subnets, the rule isn’t processed on traffic inbound to the Security subnet. As a result,
traffic flows to the Security subnet.
Firewall rules
On the firewall, forwarding rules should be created. Since the firewall is blocking or forwarding all inbound,
outbound, and intra-virtual network traffic, many firewall rules are needed. Also, all inbound traffic hits the Security
Service public IP address (on different ports), to be processed by the firewall. A best practice is to diagram the
logical flows before setting up the subnets and firewall rules, to avoid rework later. The following figure is a logical
view of the firewall rules for this example:
NOTE
Based on the Network Virtual Appliance used, the management ports vary. In this example, a Barracuda NextGen Firewall is
referenced, which uses ports 22, 801, and 807. Consult the appliance vendor documentation to find the exact ports used for
management of the device being used.
TIP
On the second application traffic rule, to simplify this example, any port is allowed. In a real scenario, the most specific port
and address ranges should be used to reduce the attack surface of this rule.
Once the previous rules are created, it’s important to review the priority of each rule to ensure traffic is allowed or
denied as desired. For this example, the rules are in priority order.
Conclusion
This example is a more complex but complete way of protecting and isolating the network than the previous
examples. (Example 2 protects just the application, and Example 1 just isolates subnets). This design allows for
monitoring traffic in both directions, and protects not just the inbound application server but enforces network
security policy for all servers on this network. Also, depending on the appliance used, full traffic auditing and
awareness can be achieved. For more information, see the detailed build instructions. These instructions include:
How to build this example perimeter network with classic PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed descriptions of each UDR, NSG command, and firewall rule.
Detailed traffic flow scenarios, showing how traffic is allowed or denied in each layer.
Example 4 Add a hybrid connection with a site-to-site, virtual appliance VPN
Back to Fast start | Detailed build instructions available soon
Environment description
Hybrid networking using a network virtual appliance (NVA) can be added to any of the perimeter network types
described in examples 1, 2, or 3.
As shown in the previous figure, a VPN connection over the Internet (site-to-site) is used to connect an on-
premises network to an Azure virtual network via an NVA.
NOTE
If you use ExpressRoute with the Azure Public Peering option enabled, a static route should be created. This static route
should send traffic to the NVA VPN IP address out over your corporate Internet connection, not via the ExpressRoute
connection. The NAT required by the ExpressRoute Azure Public Peering option can break the VPN session.
Once the VPN is in place, the NVA becomes the central hub for all networks and subnets. The firewall forwarding
rules determine which traffic flows are allowed, are translated via NAT, are redirected, or are dropped (even for
traffic flows between the on-premises network and Azure).
Traffic flows should be considered carefully, as they can be optimized or degraded by this design pattern,
depending on the specific use case.
Using the environment built in example 3, and then adding a site-to-site VPN hybrid network connection, produces
the following design:
The on-premises router, or any other network device that is compatible with your NVA for VPN, would be the VPN
client. This physical device would be responsible for initiating and maintaining the VPN connection with your NVA.
Logically to the NVA, the network looks like four separate “security zones” with the rules on the NVA being the
primary director of traffic between these zones:
Conclusion
The addition of a site-to-site VPN hybrid network connection to an Azure virtual network can extend the on-
premises network into Azure in a secure manner. With a VPN connection, your traffic is encrypted and travels over
the Internet. The NVA in this example provides a central location to enforce and manage the security policy. For
more information, see the detailed build instructions (forthcoming). These instructions include:
How to build this example perimeter network with PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed traffic flow scenarios, showing how traffic flows through this design.
Example 5 Add a hybrid connection with a site-to-site, Azure VPN gateway
Back to Fast start | Detailed build instructions available soon
Environment description
Hybrid networking using an Azure VPN gateway can be added to either perimeter network type described in
examples 1 or 2.
As shown in the preceding figure, a VPN connection over the Internet (site-to-site) is used to connect an on-
premises network to an Azure virtual network via an Azure VPN gateway.
NOTE
If you use ExpressRoute with the Azure Public Peering option enabled, a static route should be created. This static route
should send traffic to the NVA VPN IP address out over your corporate Internet connection, not via the ExpressRoute WAN.
The NAT required by the ExpressRoute Azure Public Peering option can break the VPN session.
The following figure shows the two network edges in this example. On the first edge, the NVA and NSGs control
traffic flows for intra-Azure networks and between Azure and the Internet. The second edge is the Azure VPN
gateway, which is a separate and isolated network edge between on-premises and Azure.
Traffic flows should be considered carefully, as they can be optimized or degraded by this design pattern,
depending on the specific use case.
Using the environment built in example 1, and then adding a site-to-site VPN hybrid network connection, produces
the following design:
Conclusion
The addition of a site-to-site VPN hybrid network connection to an Azure virtual network can extend the on-
premises network into Azure in a secure manner. Using the native Azure VPN gateway, your traffic is IPsec-
encrypted and travels over the Internet. Also, using the Azure VPN gateway can provide a lower-cost option (no
additional licensing cost as with third-party NVAs). This option is most economical in example 1, where no NVA is
used. For more information, see the detailed build instructions (forthcoming). These instructions include:
How to build this example perimeter network with PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed traffic flow scenarios, showing how traffic flows through this design.
Example 6 Add a hybrid connection with ExpressRoute
Back to Fast start | Detailed build instructions available soon
Environment description
Hybrid networking using an ExpressRoute private peering connection can be added to either perimeter network
type described in examples 1 or 2.
As shown in the preceding figure, ExpressRoute private peering provides a direct connection between your on-
premises network and the Azure virtual network. Traffic transits only the service provider network and the
Microsoft Azure network, never touching the Internet.
TIP
Using ExpressRoute keeps corporate network traffic off the Internet. It also allows for service level agreements from your
ExpressRoute provider. The Azure Gateway can pass up to 10 Gbps with ExpressRoute, whereas with site-to-site VPNs, the
Azure Gateway maximum throughput is 200 Mbps.
As seen in the following diagram, with this option the environment now has two network edges. The NVA and
NSG control traffic flows for intra-Azure networks and between Azure and the Internet, while the gateway is a
separate and isolated network edge between on-premises and Azure.
Traffic flows should be considered carefully, as they can be optimized or degraded by this design pattern,
depending on the specific use case.
Using the environment built in example 1, and then adding an ExpressRoute hybrid network connection, produces
the following design:
Conclusion
The addition of an ExpressRoute Private Peering network connection can extend the on-premises network into
Azure in a secure, lower latency, higher performing manner. Also, using the native Azure Gateway, as in this
example, provides a lower-cost option (no additional licensing as with third-party NVAs). For more information,
see the detailed build instructions (forthcoming). These instructions include:
How to build this example perimeter network with PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed traffic flow scenarios, showing how traffic flows through this design.
References
Helpful websites and documentation
Access Azure with Azure Resource Manager:
Accessing Azure with PowerShell: https://docs.microsoft.com/powershell/azureps-cmdlets-docs/
Virtual networking documentation: https://docs.microsoft.com/azure/virtual-network/
Network security group documentation: https://docs.microsoft.com/azure/virtual-network/virtual-networks-nsg
User-defined routing documentation: https://docs.microsoft.com/azure/virtual-network/virtual-networks-udr-overview
Azure virtual gateways: https://docs.microsoft.com/azure/vpn-gateway/
Site-to-Site VPNs: https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell
ExpressRoute documentation (be sure to check out the “Getting Started” and “How To” sections):
https://docs.microsoft.com/azure/expressroute/
Azure Network Security Best Practices
5/21/2018 • 17 minutes to read • Edit Online
Microsoft Azure enables you to connect virtual machines and appliances to other networked devices by placing
them on Azure Virtual Networks. An Azure Virtual Network is a construct that allows you to connect virtual
network interface cards to a virtual network to allow TCP/IP-based communications between network-enabled
devices. Azure Virtual Machines connected to an Azure Virtual Network can connect to devices on the same Azure
Virtual Network, on different Azure Virtual Networks, on the Internet, or on your own on-premises networks.
This article discusses a collection of Azure network security best practices. These best practices are derived from
our experience with Azure networking and the experiences of customers like you.
For each best practice, this article explains:
What the best practice is
Why you want to enable that best practice
What might be the result if you fail to enable the best practice
Possible alternatives to the best practice
How you can learn to enable the best practice
This Azure Network Security Best Practices article is based on a consensus opinion, and Azure platform capabilities
and feature sets, as they exist at the time this article was written. Opinions and technologies change over time and
this article will be updated on a regular basis to reflect those changes.
Azure Network security best practices discussed in this article include:
Logically segment subnets
Control routing behavior
Enable Forced Tunneling
Use Virtual network appliances
Deploy DMZs for security zoning
Avoid exposure to the Internet with dedicated WAN links
Optimize uptime and performance
Use global load balancing
Disable RDP Access to Azure Virtual Machines
Enable Azure Security Center
Extend your datacenter into Azure
NOTE
User Defined Routes are not required, and the default system routes work in most instances.
You can learn more about User-Defined Routes and how to configure them by reading the article What are User
Defined Routes and IP Forwarding.
Introduction
Microsoft Azure provides multiple services for managing how network traffic is distributed and load balanced. You
can use these services individually or combine their methods, depending on your needs, to build the optimal
solution.
In this tutorial, we first define a customer use case and see how it can be made more robust and performant by
using the following Azure load-balancing portfolio: Traffic Manager, Application Gateway, and Load Balancer. We
then provide step-by-step instructions for creating a deployment that is geographically redundant, distributes
traffic to VMs, and helps you manage different types of requests.
At a conceptual level, each of these services plays a distinct role in the load-balancing hierarchy.
Traffic Manager provides global DNS load balancing. It looks at incoming DNS requests and responds
with a healthy endpoint, in accordance with the routing policy the customer has selected. Options for routing
methods are:
Performance routing to send the requestor to the closest endpoint in terms of latency.
Priority routing to direct all traffic to an endpoint, with other endpoints as backup.
Weighted round-robin routing, which distributes traffic based on the weighting that is assigned to each
endpoint.
The client connects directly to that endpoint. Azure Traffic Manager detects when an endpoint is unhealthy
and then redirects the clients to another healthy instance. Refer to Azure Traffic Manager documentation to
learn more about the service.
Application Gateway provides application delivery controller (ADC) as a service, offering various Layer 7
load-balancing capabilities for your application. It allows customers to optimize web farm productivity by
offloading CPU-intensive SSL termination to the application gateway. Other Layer 7 routing capabilities include
round-robin distribution of incoming traffic, cookie-based session affinity, URL path-based routing, and the
ability to host multiple websites behind a single application gateway. Application Gateway can be configured as
an Internet-facing gateway, an internal-only gateway, or a combination of both. Application Gateway is fully
Azure managed, scalable, and highly available. It provides a rich set of diagnostics and logging capabilities for
better manageability.
Load Balancer is an integral part of the Azure SDN stack, providing high-performance, low-latency Layer 4
load-balancing services for all UDP and TCP protocols. It manages inbound and outbound connections. You can
configure public and internal load-balanced endpoints and define rules to map inbound connections to back-
end pool destinations by using TCP and HTTP health-probing options to manage service availability.
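The three Traffic Manager routing methods listed above can be sketched with a toy Python model. The endpoint names, weights, and latency figures below are purely illustrative, and the logic is a simplification of what Traffic Manager actually does at the DNS layer:

```python
import random

ENDPOINTS = [
    {"name": "westus", "healthy": True, "priority": 1, "weight": 3, "latency_ms": 40},
    {"name": "eastus", "healthy": True, "priority": 2, "weight": 1, "latency_ms": 90},
]

def priority_route(endpoints):
    """Priority routing: all traffic to the healthy endpoint with the lowest
    priority number; the others act as backups."""
    return min((e for e in endpoints if e["healthy"]), key=lambda e: e["priority"])

def weighted_route(endpoints, rng=random):
    """Weighted round-robin: distribute traffic in proportion to endpoint weights."""
    healthy = [e for e in endpoints if e["healthy"]]
    return rng.choices(healthy, weights=[e["weight"] for e in healthy])[0]

def performance_route(endpoints):
    """Performance routing: send the requester to the lowest-latency healthy endpoint."""
    return min((e for e in endpoints if e["healthy"]), key=lambda e: e["latency_ms"])

print(priority_route(ENDPOINTS)["name"])  # westus
ENDPOINTS[0]["healthy"] = False           # Traffic Manager detects an outage...
print(priority_route(ENDPOINTS)["name"])  # eastus - failover to the backup
```

In all three methods, an unhealthy endpoint drops out of consideration, which is the failover behavior described above.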
Scenario
In this example scenario, we use a simple website that serves two types of content: images and dynamically
rendered webpages. The website must be geographically redundant, and it should serve its users from the closest
(lowest latency) location to them. The application developer has decided that any URLs that match the pattern
/images/* are served from a dedicated pool of VMs that are different from the rest of the web farm.
Additionally, the default VM pool serving the dynamic content needs to talk to a back-end database that is hosted
on a high-availability cluster. The entire deployment is set up through Azure Resource Manager.
Using Traffic Manager, Application Gateway, and Load Balancer allows this website to achieve these design goals:
Multi-geo redundancy: If one region goes down, Traffic Manager routes traffic seamlessly to the closest
region without any intervention from the application owner.
Reduced latency: Because Traffic Manager automatically directs the customer to the closest region, the
customer experiences lower latency when requesting the webpage contents.
Independent scalability: Because the web application workload is separated by type of content, the
application owner can scale the request workloads independent of each other. Application Gateway ensures that
the traffic is routed to the right pools based on the specified rules and the health of the application.
Internal load balancing: Because Load Balancer is in front of the high-availability cluster, only the active and
healthy endpoint for a database is exposed to the application. Additionally, a database administrator can
optimize the workload by distributing active and passive replicas across the cluster independent of the front-end
application. Load Balancer delivers connections to the high-availability cluster and ensures that only healthy
databases receive connection requests.
The following diagram shows the architecture of this scenario:
NOTE
This example is only one of many possible configurations of the load-balancing services that Azure offers. Traffic Manager,
Application Gateway, and Load Balancer can be mixed and matched to best suit your load-balancing needs. For example, if
SSL offload or Layer 7 processing is not necessary, Load Balancer can be used in place of Application Gateway.
1. From your resource group, go to the instance of the application gateway that you created in the preceding
section.
2. Under Settings, select Backend pools, and then select Add to add the VMs that you want to associate with the
web-tier back-end pools.
3. Enter the name of the back-end pool and all the IP addresses of the machines that reside in the pool. In this
scenario, we are connecting two back-end server pools of virtual machines.
4. Under Settings of the application gateway, select Rules, and then click the Path based button to add a rule.
5. Configure the rule by providing the following information.
Basic settings:
Name: The friendly name of the rule that is accessible in the portal.
Listener: The listener that is used for the rule.
Default backend pool: The back-end pool to be used with the default rule.
Default HTTP settings: The HTTP settings to be used with the default rule.
Path-based rules:
Name: The friendly name of the path-based rule.
Paths: The path rule that is used for forwarding traffic.
Backend Pool: The back-end pool to be used with this rule.
HTTP Setting: The HTTP settings to be used with this rule.
IMPORTANT
Paths: Valid paths must start with "/". The wildcard "*" is allowed only at the end. Valid examples are /xyz, /xyz*, or
/xyz/*.
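The path-rule semantics in the note above can be sketched as follows. This is a hypothetical illustration; the pool names are invented:

```python
def is_valid_path_pattern(p):
    """Valid patterns start with '/' and may use '*' only as the final character."""
    return p.startswith("/") and "*" not in p[:-1]

def matches(pattern, url_path):
    """Match a request path against a path rule; a trailing '*' matches any suffix."""
    if pattern.endswith("*"):
        return url_path.startswith(pattern[:-1])
    return url_path == pattern

def route(url_path, rules, default_pool):
    """First matching path-based rule wins; otherwise the default back-end pool is used."""
    for pattern, pool in rules:
        if matches(pattern, url_path):
            return pool
    return default_pool

# The /images/* rule from this scenario, sending image requests to a dedicated pool.
rules = [("/images/*", "image-pool")]

assert is_valid_path_pattern("/images/*")
assert not is_valid_path_pattern("images/*")   # must start with "/"
assert not is_valid_path_pattern("/img*/x")    # "*" allowed only at the end
```

With these rules, a request for /images/logo.png is routed to image-pool, while /index.html falls through to the default pool.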
Step 3: Add application gateways to the Traffic Manager endpoints
In this scenario, Traffic Manager is connected to application gateways (as configured in the preceding steps) that
reside in different regions. Now that the application gateways are configured, the next step is to connect them to
your Traffic Manager profile.
1. Open your Traffic Manager profile. To do so, look in your resource group or search for the name of the Traffic
Manager profile from All Resources.
2. In the left pane, select Endpoints, and then click Add to add an endpoint.
Next steps
Overview of Traffic Manager
Application Gateway overview
Azure Load Balancer overview
Disaster recovery using Azure DNS and Traffic Manager
6/8/2018 • 10 minutes to read • Edit Online
Disaster recovery focuses on recovering from a severe loss of application functionality. To choose a
disaster recovery solution, business and technology owners must first determine the level of functionality that is
required during a disaster: unavailable, partially available (reduced functionality or delayed availability), or
fully available. Most enterprise customers choose a multi-region architecture for resiliency against an
application-level or infrastructure-level failure. Customers can choose several approaches to achieve
failover and high availability via redundant architecture. Here are some of the popular approaches:
Active-passive with cold standby: In this failover solution, the VMs and other appliances running in
the standby region are not active until there is a need for failover. However, the production environment is
replicated in the form of backups, VM images, or Resource Manager templates, to a different region. This
failover mechanism is cost-effective but takes a longer time to undertake a complete failover.
This step can be executed manually or via automation. It can be done manually via the console or by the Azure CLI.
The Azure SDK and API can be used to automate the CNAME update so that no manual intervention is required.
Automation can be built via Azure functions or within a third-party monitoring application or even from on-
premises.
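The core of such automation is a decision about which endpoint the zone's CNAME should point at. A minimal sketch of that logic, with a hypothetical failure counter and hypothetical endpoint names (the actual record update would be performed with the Azure SDK or the Azure CLI, not shown here):

```python
def choose_cname_target(consecutive_failures, threshold, primary, secondary):
    """Point the CNAME at the secondary (DR) endpoint once the primary has
    failed its health probe `threshold` times in a row; otherwise keep the
    primary. The real flip would then be applied via the Azure SDK or CLI."""
    return secondary if consecutive_failures >= threshold else primary

PRIMARY = "app-eastus.contoso.com"    # hypothetical production endpoint
SECONDARY = "app-westus.contoso.com"  # hypothetical DR endpoint

assert choose_cname_target(0, 3, PRIMARY, SECONDARY) == PRIMARY
assert choose_cname_target(3, 3, PRIMARY, SECONDARY) == SECONDARY
```

Requiring several consecutive probe failures before flipping avoids flapping between endpoints on a single transient error.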
How manual failover works using Azure DNS
Because the DNS server is outside the failover or disaster zone, it is insulated against any downtime. This enables
you to architect a simple failover scenario that is cost effective and works as long as the
operator has network connectivity during the disaster and can make the flip. If the solution is scripted, ensure
that the server or service running the script is insulated against the problem affecting the
production environment. Also, keep in mind the low TTL that was set on the zone, so that no resolver around
the world keeps the endpoint cached for long and customers can access the site within the RTO. For a cold standby
and a pilot light, some prewarming and other administrative activity may be required, so allow
enough time before making the flip.
Next steps
Learn more about Azure Traffic Manager.
Learn more about Azure DNS.
Plan virtual networks
5/30/2018 • 10 minutes to read • Edit Online
Creating a virtual network to experiment with is easy enough, but chances are, you will deploy multiple virtual
networks over time to support the production needs of your organization. With some planning, you will be able to
deploy virtual networks and connect the resources you need more effectively. The information in this article is most
helpful if you're already familiar with virtual networks and have some experience working with them. If you are not
familiar with virtual networks, it's recommended that you read Virtual network overview.
Naming
All Azure resources have a name. The name must be unique within a scope, which may vary for each resource type.
For example, the name of a virtual network must be unique within a resource group, but can be duplicated within a
subscription or Azure region. Defining a naming convention that you can use consistently when naming resources
is helpful when managing several network resources over time. For suggestions, see Naming conventions.
Regions
All Azure resources are created in an Azure region and subscription. A resource can only be created in a virtual
network that exists in the same region and subscription as the resource. You can, however, connect virtual networks
that exist in different subscriptions and regions. For more information, see connectivity. When deciding which
region(s) to deploy resources in, consider where consumers of the resources are physically located:
Consumers of resources typically want the lowest network latency to their resources. To determine relative
latencies between a specified location and Azure regions, see View relative latencies.
Do you have data residency, sovereignty, compliance, or resiliency requirements? If so, choosing the region that
aligns to the requirements is critical. For more information, see Azure geographies.
Do you require resiliency across Azure Availability Zones within the same Azure region for the resources you
deploy? You can deploy resources, such as virtual machines (VMs), to different availability zones within the same
virtual network. Not all Azure regions support availability zones, however. To learn more about availability zones
and the regions that support them, see Availability zones.
Subscriptions
You can deploy as many virtual networks as required within each subscription, up to the limit. Some organizations
have different subscriptions for different departments, for example. For more information and considerations
around subscriptions, see Subscription governance.
Segmentation
You can create multiple virtual networks per subscription and per region. You can create multiple subnets within
each virtual network. The considerations that follow help you determine how many virtual networks and subnets
you require:
Virtual networks
A virtual network is a virtual, isolated portion of the Azure public network. Each virtual network is dedicated to
your subscription. Things to consider when deciding whether to create one virtual network, or multiple virtual
networks in a subscription:
Do any organizational security requirements exist for isolating traffic into separate virtual networks? You can
choose to connect virtual networks or not. If you connect virtual networks, you can implement a network virtual
appliance, such as a firewall, to control the flow of traffic between the virtual networks. For more information,
see security and connectivity.
Do any organizational requirements exist for isolating virtual networks into separate subscriptions or regions?
A network interface enables a VM to communicate with other resources. Each network interface has one or
more private IP addresses assigned to it. How many network interfaces and private IP addresses do you require
in a virtual network? There are limits to the number of network interfaces and private IP addresses that you can
have within a virtual network.
Do you want to connect the virtual network to another virtual network or on-premises network? You may
choose to connect some virtual networks to each other or on-premises networks, but not others. For more
information, see connectivity. Each virtual network that you connect to another virtual network, or on-premises
network, must have a unique address space. Each virtual network has one or more public or private address
ranges assigned to its address space. An address range is specified in Classless Inter-Domain Routing (CIDR)
format, such as 10.0.0.0/16. Learn more about address ranges for virtual networks.
Do you have any organizational administration requirements for resources in different virtual networks? If so,
you might separate resources into separate virtual networks to simplify permission assignment to individuals in
your organization or to assign different policies to different virtual networks.
When you deploy some Azure service resources into a virtual network, they create their own virtual network. To
determine whether an Azure service creates its own virtual network, see information for each Azure service that
can be deployed into a virtual network.
Subnets
A virtual network can be segmented into one or more subnets up to the limits. Things to consider when deciding
whether to create one subnet or multiple subnets within a virtual network:
Each subnet must have a unique address range, specified in CIDR format, within the address space of the virtual
network. The address range cannot overlap with other subnets in the virtual network.
If you plan to deploy some Azure service resources into a virtual network, they may require, or create, their own
subnet, so there must be enough unallocated space for them to do so. To determine whether an Azure service
creates its own subnet, see information for each Azure service that can be deployed into a virtual network. For
example, if you connect a virtual network to an on-premises network using an Azure VPN Gateway, the virtual
network must have a dedicated subnet for the gateway. Learn more about gateway subnets.
Azure routes network traffic between all subnets in a virtual network, by default. You can override Azure's
default routing to prevent Azure routing between subnets, or to route traffic between subnets through a
network virtual appliance, for example. If you require that traffic between resources in the same virtual network
flow through a network virtual appliance (NVA), deploy the resources to different subnets. Learn more in
security.
You can limit access to Azure resources such as an Azure storage account or Azure SQL database, to specific
subnets with a virtual network service endpoint. Further, you can deny access to the resources from the internet.
You may create multiple subnets, and enable a service endpoint for some subnets, but not others. Learn more
about service endpoints, and the Azure resources you can enable them for.
You can associate zero or one network security group to each subnet in a virtual network. You can associate the
same, or a different, network security group to each subnet. Each network security group contains rules, which
allow or deny traffic to and from sources and destinations. Learn more about network security groups.
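Network security group rules are evaluated in priority order, with lower numbers processed first and the first match deciding the outcome. The following is a simplified sketch of that evaluation; real NSG rules also match on protocol and source/destination prefixes, and the rules shown here are hypothetical:

```python
def evaluate(rules, packet):
    """Process rules in ascending priority order; the first matching rule
    decides whether the packet is allowed or denied."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if packet["direction"] == rule["direction"] and packet["port"] in rule["ports"]:
            return rule["access"]
    return "Deny"  # nothing matched

rules = [
    # Custom rules (lower priority number = evaluated first).
    {"priority": 100,   "direction": "Inbound", "ports": {443}, "access": "Allow"},
    {"priority": 200,   "direction": "Inbound", "ports": {22},  "access": "Deny"},
    # Catch-all in the spirit of the high-numbered default rules.
    {"priority": 65500, "direction": "Inbound", "ports": range(0, 65536), "access": "Deny"},
]
```

Here an inbound request on port 443 is allowed by the priority-100 rule, SSH on port 22 is explicitly denied, and anything else falls through to the catch-all deny.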
Security
You can filter network traffic to and from resources in a virtual network using network security groups and network
virtual appliances. You can control how Azure routes traffic from subnets. You can also limit who in your
organization can work with resources in virtual networks.
Traffic filtering
You can filter network traffic between resources in a virtual network using a network security group, an NVA
that filters network traffic, or both. To deploy an NVA, such as a firewall, to filter network traffic, see the Azure
Marketplace. When using an NVA, you also create custom routes to route traffic from subnets to the NVA.
Learn more about traffic routing.
A network security group contains several default security rules that allow or deny traffic to or from resources.
A network security group can be associated to a network interface, the subnet the network interface is in, or
both. To simplify management of security rules, it's recommended that you associate a network security group
to individual subnets, rather than individual network interfaces within the subnet, whenever possible.
If different VMs within a subnet need different security rules applied to them, you can associate the network
interface in the VM to one or more application security groups. A security rule can specify an application
security group in its source, destination, or both. That rule then only applies to the network interfaces that are
members of the application security group. Learn more about network security groups and application security
groups.
Azure creates several default security rules within each network security group. One default rule allows all traffic
to flow between all resources in a virtual network. To override this behavior, use network security groups,
custom routing to route traffic to an NVA, or both. It's recommended that you familiarize yourself with all of
Azure's default security rules and understand how network security group rules are applied to a resource.
You can view sample designs for implementing a DMZ between Azure and the internet using an NVA or network
security groups.
Traffic routing
Azure creates several default routes for outbound traffic from a subnet. You can override Azure's default routing by
creating a route table and associating it to a subnet. Common reasons for overriding Azure's default routing are:
Because you want traffic between subnets to flow through an NVA. Learn more about how to configure route
tables to force traffic through an NVA.
Because you want to force all internet-bound traffic through an NVA, or on-premises, through an Azure VPN
gateway. Forcing internet traffic on-premises for inspection and logging is often referred to as forced tunneling.
Learn more about how to configure forced tunneling.
If you need to implement custom routing, it's recommended that you familiarize yourself with routing in Azure.
Connectivity
You can connect a virtual network to other virtual networks using virtual network peering, or to your on-premises
network, using an Azure VPN gateway.
Peering
When using virtual network peering, the virtual networks can be in the same, or different, supported Azure
regions. The virtual networks can be in the same, or different Azure subscriptions, as long as both subscriptions are
assigned to the same Azure Active Directory tenant. Before creating a peering, it's recommended that you
familiarize yourself with all of the peering requirements and constraints. Bandwidth between resources in virtual
networks peered in the same region is the same as if the resources were in the same virtual network.
VPN gateway
You can use an Azure VPN Gateway to connect a virtual network to your on-premises network using a site-to-site
VPN, or using a dedicated connection with Azure ExpressRoute.
You can combine peering and a VPN gateway to create hub and spoke networks, where spoke virtual networks
connect to a hub virtual network, and the hub connects to an on-premises network, for example.
Name resolution
Resources in one virtual network cannot resolve the names of resources in a peered virtual network using Azure's
built-in DNS. To resolve names in a peered virtual network, deploy your own DNS server, or use Azure DNS
private domains. Resolving names between resources in a virtual network and on-premises networks also requires
you to deploy your own DNS server.
Permissions
Azure uses role-based access control (RBAC) for access to resources. Permissions are assigned to a scope in the following
hierarchy: Subscription, management group, resource group, and individual resource. To learn more about the
hierarchy, see Organize your resources. To work with Azure virtual networks and all of their related capabilities
such as peering, network security groups, service endpoints, and route tables, you can assign members of your
organization to the built-in Owner, Contributor, or Network contributor roles, and then assign the role to the
appropriate scope. If you want to assign specific permissions for a subset of virtual network capabilities, create a
custom role and assign the specific permissions required for virtual networks, subnets and service endpoints,
network interfaces, peering, network and application security groups, or route tables to the role.
Policy
Azure Policy enables you to create, assign, and manage policy definitions. Policy definitions enforce different rules
over your resources, so the resources stay compliant with your organizational standards and service level
agreements. Azure Policy runs an evaluation of your resources, scanning for resources that are not compliant with
the policy definitions you have. For example, you can define and apply a policy that allows creation of virtual
networks in only a specific resource group or region. Another policy can require that every subnet has a network
security group associated to it. The policies are then evaluated when creating and updating resources.
Policies are applied to the following hierarchy: Subscription, management group, and resource group. Learn more
about Azure policy or deploy some virtual network policy template samples.
Next steps
Learn about all tasks, settings, and options for a virtual network, subnet and service endpoint, network interface,
peering, network and application security group, or route table.
Planning and design for VPN Gateway
8/15/2017 • 8 minutes to read • Edit Online
Planning and designing your cross-premises and VNet-to-VNet configurations can be either simple, or
complicated, depending on your networking needs. This article walks you through basic planning and design
considerations.
Planning
Cross-premises connectivity options
If you want to connect your on-premises sites securely to a virtual network, you have three different ways to do so:
Site-to-Site, Point-to-Site, and ExpressRoute. Compare the different cross-premises connections that are available.
The option you choose can depend on various considerations, such as:
What kind of throughput does your solution require?
Do you want to communicate over the public Internet via secure VPN, or over a private connection?
Do you have a public IP address available to use?
Are you planning to use a VPN device? If so, is it compatible?
Are you connecting just a few computers, or do you want a persistent connection for your site?
What type of VPN gateway is required for the solution you want to create?
Which gateway SKU should you use?
Planning table
The following table can help you decide the best connectivity option for your solution.

                     POINT-TO-SITE                SITE-TO-SITE                  EXPRESSROUTE
Azure supported      Cloud services and           Cloud services and            Services list
services             virtual machines             virtual machines
Typical bandwidths   Based on the gateway SKU     Typically < 1 Gbps            50 Mbps, 100 Mbps, 200 Mbps,
                                                  aggregate                     500 Mbps, 1 Gbps, 2 Gbps,
                                                                                5 Gbps, 10 Gbps
Typical use case     Prototyping, dev/test/lab    Dev/test/lab scenarios and    Access to all Azure services
                     scenarios for cloud          small-scale production        (validated list), enterprise-
                     services and virtual         workloads for cloud           class and mission-critical
                     machines                     services and virtual          workloads, backup, big data,
                                                  machines                      Azure as a DR site
Gateway SKUs
Each gateway SKU is characterized by the number of S2S/VNet-to-VNet tunnels it supports, the number of P2S
connections it supports, and its aggregate throughput benchmark.
Design
Connection topologies
Start by looking at the diagrams in the About VPN Gateway article. The article contains basic diagrams, the
deployment models for each topology, and the available deployment tools you can use to deploy your
configuration.
Design basics
The following sections discuss the VPN gateway basics.
Networking services limits
Scroll through the tables to view networking services limits. The limits listed may impact your design.
About subnets
When you are creating connections, you must consider your subnet ranges. You cannot have overlapping subnet
address ranges. An overlapping subnet is when one virtual network or on-premises location contains the same
address space that the other location contains. This means that your on-premises network engineers need to
carve out a range for you to use for your Azure IP addressing space and subnets. You need
address space that is not being used on the local on-premises network.
Avoiding overlapping subnets is also important when you are working with VNet-to-VNet connections. If your
subnets overlap and an IP address exists in both the sending and destination VNets, VNet-to-VNet connections
fail. Azure can't route the data to the other VNet because the destination address is part of the sending VNet.
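The overlap rule described above can be checked programmatically before you configure a connection. Here is a minimal sketch using Python's standard ipaddress module; the address ranges are hypothetical:

```python
import ipaddress

def overlaps(cidr_a, cidr_b):
    """True if the two CIDR ranges share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

vnet = ipaddress.ip_network("10.0.0.0/16")   # hypothetical VNet address space
subnets = ["10.0.0.0/24", "10.0.1.0/24"]     # hypothetical subnets

# Every subnet must sit inside the virtual network's address space...
assert all(ipaddress.ip_network(s).subnet_of(vnet) for s in subnets)
# ...and the two sides of a connection must not overlap.
assert not overlaps("10.0.0.0/16", "192.168.0.0/24")  # OK to connect
assert overlaps("10.0.0.0/16", "10.0.5.0/24")         # would make the connection fail
```

Running a check like this against the on-premises ranges your network engineers provide catches overlapping address spaces before the VPN or VNet-to-VNet connection is created.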
VPN Gateways require a specific subnet called a gateway subnet. All gateway subnets must be named
GatewaySubnet to work properly. Be sure not to name your gateway subnet a different name, and don't deploy
VMs or anything else to the gateway subnet. See Gateway Subnets.
About local network gateways
The local network gateway typically refers to your on-premises location. In the classic deployment model, the local
network gateway is referred to as a Local Network Site. When you configure a local network gateway, you give it a
name, specify the public IP address of the on-premises VPN device, and specify the address prefixes that are in the
on-premises location. Azure looks at the destination address prefixes for network traffic, consults the configuration
that you have specified for the local network gateway, and routes packets accordingly. You can modify the address
prefixes as needed. For more information, see Local network gateways.
About gateway types
Selecting the correct gateway type for your topology is critical. If you select the wrong type, your gateway won't
work properly. The gateway type specifies how the gateway itself connects and is a required configuration setting
for the Resource Manager deployment model.
The gateway types are:
Vpn
ExpressRoute
About connection types
Each configuration requires a specific connection type. The connection types are:
IPsec
Vnet2Vnet
ExpressRoute
VPNClient
About VPN types
Each configuration requires a specific VPN type. If you are combining two configurations, such as creating a Site-
to-Site connection and a Point-to-Site connection to the same VNet, you must use a VPN type that satisfies both
connection requirements.
PolicyBased: PolicyBased VPNs were previously called static routing gateways in the classic deployment
model. Policy-based VPNs encrypt and direct packets through IPsec tunnels based on the IPsec policies
configured with the combinations of address prefixes between your on-premises network and the Azure
VNet. The policy (or traffic selector) is usually defined as an access list in the VPN device configuration. The
value for a PolicyBased VPN type is PolicyBased. When using a PolicyBased VPN, keep in mind the
following limitations:
PolicyBased VPNs can only be used on the Basic gateway SKU. This VPN type is not compatible with
other gateway SKUs.
You can have only one tunnel when using a PolicyBased VPN.
You can only use PolicyBased VPNs for S2S connections, and only for certain configurations. Most VPN
Gateway configurations require a RouteBased VPN.
RouteBased: RouteBased VPNs were previously called dynamic routing gateways in the classic deployment
model. RouteBased VPNs use "routes" in the IP forwarding or routing table to direct packets into their
corresponding tunnel interfaces. The tunnel interfaces then encrypt or decrypt the packets in and out of the
tunnels. The policy (or traffic selector) for RouteBased VPNs are configured as any-to-any (or wild cards). The
value for a RouteBased VPN type is RouteBased.
The following tables show the VPN type as it maps to each connection configuration. Make sure the VPN type for
your gateway matches the configuration that you want to create.
VPN type names by deployment model

RESOURCE MANAGER DEPLOYMENT MODEL    CLASSIC DEPLOYMENT MODEL
RouteBased                           Dynamic
PolicyBased                          Static
This page walks you through the ExpressRoute service provisioning and routing configuration workflows at a
high level.
The following figure and corresponding steps show the tasks you must follow to have an ExpressRoute
circuit provisioned end-to-end.
1. Use PowerShell to configure an ExpressRoute circuit. Follow the instructions in the Create ExpressRoute circuits
article for more details.
2. Order connectivity from the service provider. This process varies. Contact your connectivity provider for more
details about how to order connectivity.
3. Ensure that the circuit has been provisioned successfully by verifying the ExpressRoute circuit provisioning state
through PowerShell.
4. Configure routing domains. If your connectivity provider manages Layer 3 for you, they will configure
routing for your circuit. If your connectivity provider only offers Layer 2 services, you must configure
routing per guidelines described in the routing requirements and routing configuration pages.
Enable Azure private peering - You must enable this peering to connect to VMs / cloud services deployed
within virtual networks.
Enable Azure public peering - You must enable Azure public peering if you wish to connect to Azure
services hosted on public IP addresses. This is a requirement to access Azure resources if you have
chosen to enable default routing for Azure private peering.
Enable Microsoft peering - You must enable this to access Office 365 and Dynamics 365.
IMPORTANT
You must ensure that you use a separate proxy / edge to connect to Microsoft than the one you use for the
Internet. Using the same edge for both ExpressRoute and the Internet will cause asymmetric routing and
cause connectivity outages for your network.
5. Linking virtual networks to ExpressRoute circuits - You can link virtual networks to your ExpressRoute circuit.
Follow instructions to link VNets to your circuit. These VNets can either be in the same Azure subscription as
the ExpressRoute circuit, or can be in a different subscription.
While the service provider is provisioning the circuit, the circuit reports the following state:
ServiceProviderProvisioningState : Provisioning
Status : Enabled
After the service provider has provisioned the circuit, it moves to the following state:
ServiceProviderProvisioningState : Provisioned
Status : Enabled
The circuit must be in the Provisioned and Enabled state before you can use it. If you are using a Layer 2
provider, you can configure routing for your circuit only when it is in this state.
When the connectivity provider is deprovisioning the circuit
If you requested the service provider to deprovision the ExpressRoute circuit, you will see the circuit set to the
following state after the service provider has completed the deprovisioning process.
ServiceProviderProvisioningState : NotProvisioned
Status : Enabled
You can choose to re-enable it if needed, or run PowerShell cmdlets to delete the circuit.
IMPORTANT
If you run the PowerShell cmdlet to delete the circuit when the ServiceProviderProvisioningState is Provisioning or
Provisioned, the operation will fail. Work with your connectivity provider to deprovision the ExpressRoute circuit first,
and then delete the circuit. Microsoft will continue to bill the circuit until you run the PowerShell cmdlet to delete the circuit.
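The state-dependent behavior described above can be expressed as two simple guards. This is a sketch based on the states shown in this article, not an actual SDK call:

```python
def can_configure_routing(provisioning_state, status):
    """Routing for a Layer 2 circuit can be configured only once the
    service provider has provisioned it and the circuit is enabled."""
    return provisioning_state == "Provisioned" and status == "Enabled"

def can_delete_circuit(provisioning_state):
    """Deleting the circuit fails while the provider still holds it;
    the provider must deprovision it (NotProvisioned) first."""
    return provisioning_state == "NotProvisioned"

assert not can_configure_routing("Provisioning", "Enabled")
assert can_configure_routing("Provisioned", "Enabled")
assert not can_delete_circuit("Provisioned")
assert can_delete_circuit("NotProvisioned")
```

Automation that manages circuits (for example, via PowerShell) can apply guards like these before issuing routing or delete operations, avoiding failed calls and continued billing.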
Next steps
Configure your ExpressRoute connection.
Create an ExpressRoute circuit
Configure routing
Link a VNet to an ExpressRoute circuit
What is Azure Virtual Network?
5/7/2018 • 4 minutes to read • Edit Online
Azure Virtual Network enables many types of Azure resources, such as Azure Virtual Machines (VMs), to securely
communicate with each other, the internet, and on-premises networks. Azure Virtual Network provides the
following key capabilities:
Next steps
You now have an overview of Azure Virtual Network. To get started using a virtual network, create one, deploy a
few VMs to it, and communicate between the VMs. To learn how, see the Create a virtual network quickstart.
What is Azure Load Balancer?
6/1/2018 • 16 minutes to read • Edit Online
With Azure Load Balancer you can scale your applications and create high availability for your services. Load
Balancer supports inbound and outbound scenarios, provides low latency and high throughput, and scales up to
millions of flows for all TCP and UDP applications.
Load Balancer distributes new inbound flows that arrive on the load balancer's frontend to backend pool instances,
according to rules and health probes.
Additionally, a public load balancer can provide outbound connections for virtual machines (VMs) inside your
virtual network by translating their private IP addresses to public IP addresses.
Azure Load Balancer is available in two SKUs: Basic and Standard. There are differences in scale, features, and
pricing. Any scenario that's possible with Basic Load Balancer can also be created with Standard Load Balancer,
although the approaches might differ slightly. As you learn about Load Balancer, it is important to familiarize
yourself with the fundamentals and SKU-specific differences.
NOTE
Azure provides a suite of fully managed load-balancing solutions for your scenarios. If you are looking for Transport Layer
Security (TLS) protocol termination ("SSL offload") or per-HTTP/HTTPS-request application-layer processing, review
Application Gateway. If you are looking for global DNS load balancing, review Traffic Manager. Your end-to-end scenarios
might benefit from combining these solutions as needed.
NOTE
If you are using a newer design scenario, consider using Standard Load Balancer.
Standalone VMs, availability sets, and virtual machine scale sets can be connected to only one SKU, never both.
When you use them with public IP addresses, both Load Balancer and the public IP address SKU must match.
Load Balancer and public IP SKUs are not mutable.
It is a best practice to specify the SKUs explicitly, even though it is not yet mandatory. At this time, required changes are being kept to a minimum. If a SKU is not specified, it is interpreted as an intention to use the 2017-08-01 API version of the Basic SKU.
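Because an unspecified SKU falls back to Basic, one way to avoid ambiguity is to pass the SKU explicitly at creation time. A hedged Azure CLI sketch, with illustrative resource names:

```shell
# Sketch: create a load balancer and state the SKU explicitly instead of
# relying on the Basic default.
az network lb create \
  --resource-group myResourceGroup \
  --name myLoadBalancer \
  --sku Standard
```

Omitting --sku on this command selects Basic.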
IMPORTANT
Standard Load Balancer is a new Load Balancer product and largely a superset of Basic Load Balancer. There are important
and deliberate differences between the two products. Any end-to-end scenario that's possible with Basic Load Balancer can
also be created with Standard Load Balancer. If you're already used to Basic Load Balancer, you should familiarize yourself
with Standard Load Balancer to understand the latest changes in behavior between Standard and Basic and their impact.
Review this section carefully.
The two SKUs compare as follows:
Backend pool endpoints
Standard SKU: Any VM in a single virtual network, including a blend of VMs, availability sets, and virtual machine scale sets.
Basic SKU: VMs in a single availability set or virtual machine scale set.
Diagnostics
Standard SKU: Azure Monitor, with multi-dimensional metrics including byte and packet counters, health probe status, connection attempts (TCP SYN), outbound connection health (SNAT successful and failed flows), and active data plane measurements.
Basic SKU: Azure Log Analytics for public load balancer only, SNAT exhaustion alert, backend pool health count.
Secure by default
Standard SKU: Closed for public IP and load balancer endpoints. For traffic to flow, a network security group must be used to explicitly whitelist entities.
Basic SKU: Open by default; network security group optional.
Outbound connections
Standard SKU: Multiple front ends with per-rule opt-out. An outbound scenario must be explicitly created for the VM to be able to use outbound connectivity. Virtual network service endpoints can be reached without outbound connectivity and do not count toward data processed. Any public IP addresses, including Azure PaaS services that are unavailable as virtual network service endpoints, must be reached via outbound connectivity and count toward data processed. When only an internal load balancer is serving a VM, outbound connections via default SNAT are unavailable. Outbound SNAT programming is transport-protocol specific, based on the protocol of the inbound load-balancing rule.
Basic SKU: Single front end, selected at random when multiple front ends are present. When only an internal load balancer is serving a VM, the default SNAT is used.
SLA
Standard SKU: 99.99 percent for a data path with two healthy VMs.
Basic SKU: Implicit in the VM SLA.
For more information, see service limits for Load Balancer. For Standard Load Balancer details, see overview, pricing, and SLA.
Concepts
Public load balancer
A public load balancer maps the public IP address and port number of incoming traffic to the private IP address
and port number of the VM, and vice versa for the response traffic from the VM. By applying load-balancing rules,
you can distribute specific types of traffic across multiple VMs or services. For example, you can spread the load of
web request traffic across multiple web servers.
The following figure shows a load-balanced endpoint for web traffic that is shared among three VMs for the public
and private TCP port 80. These three VMs are in a load-balanced set.
Figure: Load balancing multi-tier applications by using both public and internal load balancers
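The port 80 scenario in the figure can be sketched with the Azure CLI; the probe, rule, frontend, and pool names below are assumptions about an existing load balancer, not values from the article:

```shell
# Sketch: a TCP health probe plus a load-balancing rule that spreads
# public port 80 across the backend pool of an existing load balancer.
az network lb probe create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myHealthProbe \
  --protocol tcp \
  --port 80

az network lb rule create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myHTTPRule \
  --protocol tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name LoadBalancerFrontEnd \
  --backend-pool-name myBackendPool \
  --probe-name myHealthProbe
```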
Pricing
Standard Load Balancer usage is charged based on the number of configured load-balancing rules and the
amount of processed inbound and outbound data. For Standard Load Balancer pricing information, go to the Load
Balancer pricing page.
Basic Load Balancer is offered at no charge.
SLA
For information about the Standard Load Balancer SLA, go to the Load Balancer SLA page.
Limitations
Load Balancer is a TCP or UDP product for load balancing and port forwarding for these specific IP protocols. Load-balancing rules and inbound NAT rules are supported for TCP and UDP, and are not supported for other IP protocols, including ICMP. Load Balancer does not terminate, respond to, or otherwise interact with the payload of a UDP or TCP flow; it is not a proxy. Successful validation of connectivity to a frontend must take place in-band, with the same protocol used in a load-balancing or inbound NAT rule (TCP or UDP), and at least one of your virtual machines must generate a response for a client to see a response from a frontend. Not receiving an in-band response from the Load Balancer frontend indicates that no virtual machines were able to respond. It is not possible to interact with a Load Balancer frontend without a virtual machine that is able to respond. This also applies to outbound connections, where port masquerade SNAT is supported only for TCP and UDP; any other IP protocols, including ICMP, will also fail. Assign an instance-level public IP address to mitigate this limitation.
Unlike public Load Balancers, which provide outbound connections when translating from private IP addresses inside the virtual network to public IP addresses, internal Load Balancers do not translate outbound-originated connections to the frontend of an internal Load Balancer, because both are in private IP address space. This avoids the potential for SNAT exhaustion inside unique internal IP address space, where translation is not required. The side effect is that if an outbound flow from a VM in the backend pool attempts a flow to the frontend of the internal Load Balancer in whose pool it resides, and is mapped back to itself, both legs of the flow don't match and the flow fails. If the flow did not map back to the same VM in the backend pool that created the flow to the frontend, the flow succeeds. When the flow maps back to itself, the outbound flow appears to originate from the VM to the frontend, and the corresponding inbound flow appears to originate from the VM to itself. From the guest OS's point of view, the inbound and outbound parts of the same flow don't match inside the virtual machine. The TCP stack will not recognize these halves as being part of the same flow, because the source and destination don't match. When the flow maps to any other VM in the backend pool, the halves of the flow do match, and that VM can successfully respond to the flow. The symptom of this scenario is intermittent connection timeouts. There are several common workarounds for reliably achieving this scenario (originating flows from a backend pool to the backend pool's respective internal Load Balancer frontend), including inserting a third-party proxy behind the internal Load Balancer or using DSR-style rules. While you could use a public Load Balancer to mitigate, the resulting scenario is prone to SNAT exhaustion and should be avoided unless carefully managed.
Next steps
You now have an overview of Azure Load Balancer. To get started with using a load balancer, create one, create
VMs with a custom IIS extension installed, and load-balance the web app between the VMs. To learn how, see the
Create a Basic Load Balancer quickstart.
What is Azure Application Gateway?
4/25/2018 • 4 minutes to read
Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web
applications.
Traditional load balancers operate at the transport layer (OSI layer 4 - TCP and UDP) and route traffic based on source IP address and port to a destination IP address and port. But with Application Gateway, you can be
even more specific. For example, you can route traffic based on the incoming URL. So if /images is in the
incoming URL, you can route traffic to a specific set of servers (known as a pool) configured for images. If /video
is in the URL, that traffic is routed to another pool optimized for videos.
This type of routing is known as application layer (OSI layer 7) load balancing. Azure Application Gateway can do URL-based routing and more. The following features are included with Azure Application Gateway:
URL-based routing
URL path-based routing allows you to route traffic to back-end server pools based on the URL path of the request. One scenario is to route requests for different content types to different pools.
For example, requests for http://contoso.com/video/* are routed to VideoServerPool, and requests for http://contoso.com/images/* are routed to ImageServerPool. DefaultServerPool is selected if none of the path patterns match.
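A URL path map like the one described can be sketched with the Azure CLI; the gateway, HTTP settings, and rule names are assumptions about an existing deployment:

```shell
# Sketch: route /video/* and /images/* to different backend pools; anything
# else falls through to DefaultServerPool.
az network application-gateway url-path-map create \
  --resource-group myResourceGroupAG \
  --gateway-name myAppGateway \
  --name myPathMap \
  --paths /video/* \
  --address-pool VideoServerPool \
  --default-address-pool DefaultServerPool \
  --http-settings appGatewayBackendHttpSettings \
  --rule-name videoRule

az network application-gateway url-path-map rule create \
  --resource-group myResourceGroupAG \
  --gateway-name myAppGateway \
  --path-map-name myPathMap \
  --name imageRule \
  --paths /images/* \
  --address-pool ImageServerPool
```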
Redirection
A common scenario for many web applications is to support automatic HTTP to HTTPS redirection to ensure all
communication between an application and its users occurs over an encrypted path.
In the past, you might have used techniques such as creating a dedicated pool whose sole purpose is to redirect requests it receives on HTTP to HTTPS. Application Gateway can now redirect traffic on the gateway itself. This simplifies application configuration, optimizes resource usage, and supports new redirection scenarios, including global and path-based redirection. Application Gateway redirection support is not limited to HTTP-to-HTTPS redirection alone; it is a generic redirection mechanism, so you can redirect from and to any port you define using rules, and you can also redirect to an external site.
Application Gateway redirection support offers the following capabilities:
Global redirection from one port to another port on the Gateway. This enables HTTP to HTTPS redirection on
a site.
Path-based redirection. This type of redirection enables HTTP to HTTPS redirection only on a specific site area,
for example a shopping cart area denoted by /cart/* .
Redirect to an external site.
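A global HTTP-to-HTTPS redirect can be sketched with the Azure CLI; the gateway and listener names are assumptions, and the target HTTPS listener must already exist:

```shell
# Sketch: a permanent redirect configuration that sends traffic to an
# existing HTTPS listener, preserving the path and query string.
az network application-gateway redirect-config create \
  --resource-group myResourceGroupAG \
  --gateway-name myAppGateway \
  --name httpToHttps \
  --type Permanent \
  --target-listener httpsListener \
  --include-path true \
  --include-query-string true
```

A request-routing rule then binds the HTTP listener to this redirect configuration instead of a backend pool.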
Multiple-site hosting
Multiple-site hosting enables you to configure more than one web site on the same application gateway instance.
This feature allows you to configure a more efficient topology for your deployments by adding up to 20 web sites
to one application gateway. Each web site can be directed to its own pool. For example, application gateway can
serve traffic for contoso.com and fabrikam.com from two server pools called ContosoServerPool and
FabrikamServerPool.
Requests for http://contoso.com are routed to ContosoServerPool, and http://fabrikam.com are routed to
FabrikamServerPool.
Similarly, two subdomains of the same parent domain can be hosted on the same application gateway
deployment. Examples of using subdomains could include http://blog.contoso.com and http://app.contoso.com
hosted on a single application gateway deployment.
Session affinity
The cookie-based session affinity feature is useful when you want to keep a user session on the same server. By
using gateway-managed cookies, the Application Gateway can direct subsequent traffic from a user session to the
same server for processing. This is important in cases where session state is saved locally on the server for a user
session.
Next steps
Depending on your requirements and environment, you can create a test Application Gateway using either the
Azure portal, Azure PowerShell, or Azure CLI:
Quickstart: Direct web traffic with Azure Application Gateway - Azure portal.
Quickstart: Direct web traffic with Azure Application Gateway - Azure PowerShell
Quickstart: Direct web traffic with Azure Application Gateway - Azure CLI
What is Azure DNS?
6/13/2018 • 2 minutes to read
Azure DNS is a hosting service for DNS domains, providing name resolution using Microsoft Azure infrastructure.
By hosting your domains in Azure, you can manage your DNS records using the same credentials, APIs, tools, and
billing as your other Azure services.
You can't use Azure DNS to buy a domain name. For an annual fee, you can buy a domain name using Azure Web
Apps or a third-party domain name registrar. Your domains can then be hosted in Azure DNS for record
management. See Delegate a Domain to Azure DNS for details.
The following features are included with Azure DNS:
Security
The Azure DNS service is based on Azure Resource Manager. So, you get Resource Manager features such as:
role-based access control - to control who has access to specific actions for your organization.
activity logs - to monitor how a user in your organization modified a resource or to find an error when
troubleshooting.
resource locking - to lock a subscription, resource group, or resource to prevent other users in your
organization from accidentally deleting or modifying critical resources.
For more information, see How to protect DNS zones and records.
Ease of use
The Azure DNS service can manage DNS records for your Azure services, and can provide DNS for your external
resources as well. Azure DNS is integrated in the Azure portal and uses the same credentials, support contract, and
billing as your other Azure services.
DNS billing is based on the number of DNS zones hosted in Azure and by the number of DNS queries. To learn
more about pricing, see Azure DNS Pricing.
Your domains and records can be managed using the Azure portal, Azure PowerShell cmdlets, and the cross-
platform Azure CLI. Applications requiring automated DNS management can integrate with the service using the
REST API and SDKs.
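As a concrete example of that management surface, hosting a zone and adding a record can be sketched with the Azure CLI; the zone name and IP address below are placeholders, not values from the article:

```shell
# Sketch: create a DNS zone in Azure DNS, then add an A record for www.
az network dns zone create \
  --resource-group myResourceGroup \
  --name contoso.com

az network dns record-set a add-record \
  --resource-group myResourceGroup \
  --zone-name contoso.com \
  --record-set-name www \
  --ipv4-address 203.0.113.10
```

The zone's name servers must then be delegated at the registrar before the records resolve publicly.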
What is Traffic Manager?
Microsoft Azure Traffic Manager allows you to control the distribution of user traffic for service endpoints in
different datacenters. Service endpoints supported by Traffic Manager include Azure VMs, Web Apps, and cloud
services. You can also use Traffic Manager with external, non-Azure endpoints.
Traffic Manager uses the Domain Name System (DNS) to direct client requests to the most appropriate endpoint
based on a traffic-routing method and the health of the endpoints. Traffic Manager provides a range of traffic-
routing methods and endpoint monitoring options to suit different application needs and automatic failover
models. Traffic Manager is resilient to failure, including the failure of an entire Azure region.
NOTE
When using a vanity domain with Azure Traffic Manager, you must use a CNAME to point your vanity domain name to your
Traffic Manager domain name. DNS standards do not allow you to create a CNAME at the 'apex' (or root) of a domain. Thus
you cannot create a CNAME for 'contoso.com' (sometimes called a 'naked' domain). You can only create a CNAME for a
domain under 'contoso.com', such as 'www.contoso.com'. To work around this limitation, we recommend using a simple
HTTP redirect to direct requests for 'contoso.com' to an alternative name such as 'www.contoso.com'.
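The recommended setup, a CNAME below the apex pointing at the Traffic Manager domain name, can be sketched with the Azure CLI (the zone and profile names are placeholders):

```shell
# Sketch: point www.contoso.com at a Traffic Manager profile. A CNAME at
# the apex ('contoso.com' itself) is not allowed by DNS standards.
az network dns record-set cname set-record \
  --resource-group myResourceGroup \
  --zone-name contoso.com \
  --record-set-name www \
  --cname myprofile.trafficmanager.net
```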
Pricing
For pricing information, see Traffic Manager Pricing.
FAQ
For frequently asked questions about Traffic Manager, see Traffic Manager FAQs
Next steps
Learn more about Traffic Manager endpoint monitoring and automatic failover.
Learn more about Traffic Manager traffic routing methods.
Learn about some of the other key networking capabilities of Azure.
What is VPN Gateway?
4/25/2018 • 10 minutes to read
A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an
Azure virtual network and an on-premises location over the public Internet. You can also use a VPN gateway to
send encrypted traffic between Azure virtual networks over the Microsoft network. Each virtual network can have
only one VPN gateway. However, you can create multiple connections to the same VPN gateway. When you
create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.
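Creating such a gateway can be sketched with the Azure CLI; the names and SKU below are illustrative, and the VNet is assumed to already contain a subnet named GatewaySubnet plus a public IP resource for the gateway:

```shell
# Sketch: a route-based VPN gateway on an existing virtual network.
# Gateway creation can take 45 minutes or more; --no-wait returns early.
az network vnet-gateway create \
  --resource-group myResourceGroup \
  --name myVpnGateway \
  --vnet myVirtualNetwork \
  --public-ip-address myGatewayIP \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --no-wait
```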
The three cross-premises options compare as follows (Point-to-Site VPN, Site-to-Site VPN, and ExpressRoute, respectively):
Azure supported services
Point-to-Site: Cloud Services and Virtual Machines.
Site-to-Site: Cloud Services and Virtual Machines.
ExpressRoute: Services list.
Typical bandwidths
Point-to-Site: Based on the gateway SKU.
Site-to-Site: Typically < 1 Gbps aggregate.
ExpressRoute: 50 Mbps, 100 Mbps, 200 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, 10 Gbps.
Typical use case
Point-to-Site: Prototyping, and dev/test/lab scenarios for cloud services and virtual machines.
Site-to-Site: Dev/test/lab scenarios and small-scale production workloads for cloud services and virtual machines.
ExpressRoute: Access to all Azure services (validated list), enterprise-class and mission-critical workloads, backup, Big Data, and Azure as a DR site.
Gateway SKUs
When you create a virtual network gateway, you specify the gateway SKU that you want to use. Select the SKU
that satisfies your requirements based on the types of workloads, throughputs, features, and SLAs. For more
information about gateway SKUs, including supported features, production and dev-test, and configuration steps,
see Gateway SKUs.
Gateway SKUs by tunnel, connection, and throughput
Each gateway SKU supports a different number of S2S/VNet-to-VNet tunnels and P2S connections, and a different aggregate throughput benchmark. For the full table, see Gateway SKUs.
Multi-Site
This type of connection is a variation of the Site-to-Site connection. You create more than one VPN connection
from your virtual network gateway, typically connecting to multiple on-premises sites. When working with
multiple connections, you must use a RouteBased VPN type (known as a dynamic gateway when working with
classic VNets). Because each virtual network can only have one VPN gateway, all connections through the
gateway share the available bandwidth. This type of connection is often called a "multi-site" connection.
Deployment models and methods for Site-to-Site and Multi-Site
Site-to-Site and Multi-Site connections can be deployed through the Azure portal, PowerShell, or Azure CLI, depending on the deployment model; some methods contain steps that require PowerShell.
RADIUS authentication
Some RADIUS authentication deployment methods are available only for VNets in the same subscription, and some also require PowerShell.
Pricing
You pay for two things: the hourly compute costs for the virtual network gateway, and the egress data transfer
from the virtual network gateway. Pricing information can be found on the Pricing page.
Virtual network gateway compute costs
Each virtual network gateway has an hourly compute cost. The price is based on the gateway SKU that you
specify when you create a virtual network gateway. The cost is for the gateway itself and is in addition to the data
transfer that flows through the gateway.
Data transfer costs
Data transfer costs are calculated based on egress traffic from the source virtual network gateway.
If you are sending traffic to your on-premises VPN device, it is charged at the Internet egress data transfer rate.
If you are sending traffic between virtual networks in different regions, the pricing is based on the region.
If you are sending traffic between virtual networks in the same region, there are no data transfer costs; traffic between VNets in the same region is free.
For more information about gateway SKUs for VPN Gateway, see Gateway SKUs.
FAQ
For frequently asked questions about VPN gateway, see the VPN Gateway FAQ.
Next steps
Plan your VPN gateway configuration. See VPN Gateway Planning and Design.
View the VPN Gateway FAQ for additional information.
View the Subscription and service limits.
Learn about some of the other key networking capabilities of Azure.
ExpressRoute overview
3/12/2018 • 5 minutes to read
Microsoft Azure ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private
connection facilitated by a connectivity provider. With ExpressRoute, you can establish connections to Microsoft
cloud services, such as Microsoft Azure, Office 365, and Dynamics 365.
Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider at a co-location facility. ExpressRoute connections do not go over the
public Internet. This lets ExpressRoute connections offer more reliability, faster speeds, lower latencies, and higher
security than typical connections over the Internet. For information on how to connect your network to Microsoft
using ExpressRoute, see ExpressRoute connectivity models.
Key benefits
Layer 3 connectivity between your on-premises network and the Microsoft Cloud through a connectivity
provider. Connectivity can be from an any-to-any (IPVPN) network, a point-to-point Ethernet connection, or through a virtual cross-connection via an Ethernet exchange.
Connectivity to Microsoft cloud services across all regions in the geopolitical region.
Global connectivity to Microsoft services across all regions with ExpressRoute premium add-on.
Dynamic routing between your network and Microsoft over industry standard protocols (BGP).
Built-in redundancy in every peering location for higher reliability.
Connection uptime SLA.
QoS support for Skype for Business.
For more information, see the ExpressRoute FAQ.
Features
Layer 3 connectivity
Microsoft uses the industry standard dynamic routing protocol (BGP) to exchange routes between your on-premises network, your instances in Azure, and Microsoft public addresses. We establish multiple BGP sessions with your
network for different traffic profiles. More details can be found in the ExpressRoute circuit and routing domains
article.
Redundancy
Each ExpressRoute circuit consists of two connections to two Microsoft Enterprise edge routers (MSEEs) from the
connectivity provider/your network edge. Microsoft requires dual BGP connection from the connectivity
provider/your side – one to each MSEE. You may choose not to deploy redundant devices/Ethernet circuits at your
end. However, connectivity providers use redundant devices to ensure that your connections are handed off to
Microsoft in a redundant manner. A redundant Layer 3 connectivity configuration is a requirement for our SLA to be valid.
Connectivity to Microsoft cloud services
ExpressRoute connections enable access to the following services:
Microsoft Azure services
Microsoft Office 365 services
Microsoft Dynamics 365
NOTE
Software as a Service offerings, like Office 365 and Dynamics 365, were created to be accessed securely and reliably via the
Internet. Because of this, we recommend ExpressRoute for these applications only for specific scenarios. For information
about using ExpressRoute to access Office 365, visit Azure ExpressRoute for Office 365.
For a detailed list of services supported over ExpressRoute, visit the ExpressRoute FAQ page.
Connectivity to all regions within a geopolitical region
You can connect to Microsoft in one of our peering locations and have access to all regions within the geopolitical
region.
For example, if you connected to Microsoft in Amsterdam through ExpressRoute, you have access to all Microsoft
cloud services hosted in Northern Europe and Western Europe. For an overview of the geopolitical regions, the
associated Microsoft cloud regions, and corresponding ExpressRoute peering locations, see the ExpressRoute
partners and peering locations article.
Global connectivity with ExpressRoute premium add-on
You can enable the ExpressRoute premium add-on feature to extend connectivity across geopolitical boundaries.
For example, if you are connected to Microsoft in Amsterdam through ExpressRoute, you will have access to all
Microsoft cloud services hosted in all regions across the world (national clouds are excluded). You can access
services deployed in South America or Australia the same way you access the North and West Europe regions.
Rich connectivity partner ecosystem
ExpressRoute has a constantly growing ecosystem of connectivity providers and SI partners. For the latest
information, refer to the ExpressRoute providers and locations article.
Connectivity to national clouds
Microsoft operates isolated cloud environments for special geopolitical regions and customer segments. Refer to
the ExpressRoute providers and locations page for a list of national clouds and providers.
Bandwidth options
You can purchase ExpressRoute circuits for a wide range of bandwidths. Supported bandwidths are listed below.
Be sure to check with your connectivity provider to determine the list of supported bandwidths they provide.
50 Mbps
100 Mbps
200 Mbps
500 Mbps
1 Gbps
2 Gbps
5 Gbps
10 Gbps
Dynamic scaling of bandwidth
You can increase the ExpressRoute circuit bandwidth (on a best effort basis) without having to tear down your
connections.
Flexible billing models
You can pick a billing model that works best for you. Choose between the billing models listed below. For more
information, see the ExpressRoute FAQ.
Unlimited data. The ExpressRoute circuit is charged based on a monthly fee, and all inbound and outbound
data transfer is included free of charge.
Metered data. The ExpressRoute circuit is charged based on a monthly fee. All inbound data transfer is free of
charge. Outbound data transfer is charged per GB of data transfer. Data transfer rates vary by region.
ExpressRoute premium add-on. The ExpressRoute premium is an add-on over the ExpressRoute circuit. The
ExpressRoute premium add-on provides the following capabilities:
Increased route limits for Azure public and Azure private peering from 4,000 routes to 10,000 routes.
Global connectivity for services. An ExpressRoute circuit created in any region (excluding national
clouds) will have access to resources across any other region in the world. For example, a virtual
network created in West Europe can be accessed through an ExpressRoute circuit provisioned in Silicon
Valley.
Increased number of VNet links per ExpressRoute circuit from 10 to a larger limit, depending on the
bandwidth of the circuit.
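The bandwidth and billing model are both chosen when the circuit is created; an Azure CLI sketch, where the provider, peering location, and names are illustrative assumptions:

```shell
# Sketch: order a 200 Mbps metered ExpressRoute circuit. The circuit must
# still be provisioned by the connectivity provider before it carries traffic.
az network express-route create \
  --resource-group myResourceGroup \
  --name myCircuit \
  --provider "Equinix" \
  --peering-location "Silicon Valley" \
  --bandwidth 200 \
  --sku-family MeteredData \
  --sku-tier Standard
```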
FAQ
For frequently asked questions about ExpressRoute, see the ExpressRoute FAQ.
Next steps
Learn about ExpressRoute connectivity models.
Learn about ExpressRoute connections and routing domains. See ExpressRoute circuits and routing domains.
Find a service provider. See ExpressRoute partners and peering locations.
Ensure that all prerequisites are met. See ExpressRoute prerequisites.
Refer to the requirements for Routing, NAT, and QoS.
Configure your ExpressRoute connection.
Create an ExpressRoute circuit
Configure peering for an ExpressRoute circuit
Connect a virtual network to an ExpressRoute circuit
Learn about some of the other key networking capabilities of Azure.
Quickstart: Create a virtual network using the Azure
portal
4/9/2018 • 3 minutes to read
A virtual network enables Azure resources, such as virtual machines (VMs), to communicate privately with each
other, and with the internet. In this quickstart, you learn how to create a virtual network. After creating a virtual
network, you deploy two VMs into the virtual network. You then connect to one VM from the internet, and
communicate privately between the two VMs.
If you don't have an Azure subscription, create a free account before you begin.
Log in to Azure
Log in to the Azure portal at https://portal.azure.com.
When creating the virtual network, enter myVirtualNetwork for the Name setting. When creating the virtual machine, enter myVm1 for the Name setting.
2. After selecting the Connect button, a Remote Desktop Protocol (.rdp) file is created and downloaded to
your computer.
3. Open the downloaded .rdp file. If prompted, select Connect. Enter the user name and password you specified
when creating the VM. You may need to select More choices, then Use a different account, to specify the
credentials you entered when you created the VM.
4. Select OK.
5. You may receive a certificate warning during the sign-in process. If you receive the warning, select Yes or
Continue, to proceed with the connection.
Clean up resources
When no longer needed, delete the resource group and all of the resources it contains:
1. Enter myResourceGroup in the Search box at the top of the portal. When you see myResourceGroup in the
search results, select it.
2. Select Delete resource group.
3. Enter myResourceGroup for TYPE THE RESOURCE GROUP NAME: and select Delete.
Next steps
In this quickstart, you created a default virtual network and two VMs. You connected to one VM from the internet
and communicated privately between the VM and another VM. To learn more about virtual network settings, see
Manage a virtual network.
By default, Azure allows unrestricted private communication between virtual machines, but only allows inbound
remote desktop connections to Windows VMs from the internet. To learn how to allow or restrict different types of
network communication to and from VMs, advance to the Filter network traffic tutorial.
Create an application gateway using the Azure portal
5/1/2018 • 3 minutes to read
You can use the Azure portal to create or manage application gateways. This quickstart shows you how to create
network resources, backend servers, and an application gateway.
If you don't have an Azure subscription, create a free account before you begin.
Log in to Azure
Log in to the Azure portal at https://portal.azure.com.
Set-AzureRmVMExtension `
-ResourceGroupName myResourceGroupAG `
-ExtensionName IIS `
-VMName myVM `
-Publisher Microsoft.Compute `
-ExtensionType CustomScriptExtension `
-TypeHandlerVersion 1.4 `
-SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' `
-Location EastUS
3. Create a second virtual machine and install IIS using the steps that you just finished. Enter myVM2 for its
name and for VMName in Set-AzureRmVMExtension.
Add backend servers
1. Click All resources, and then click myAppGateway.
2. Click Backend pools. A default pool was automatically created with the application gateway. Click
appGatewayBackendPool.
3. Click Add target to add each virtual machine that you created to the backend pool.
4. Click Save.
Test the application gateway
1. Find the public IP address for the application gateway on the Overview screen.
2. Copy the public IP address, and then paste it into the address bar of your browser.
Clean up resources
When no longer needed, delete the resource group, application gateway, and all related resources. To do so, select
the resource group that contains the application gateway and click Delete.
Next steps
In this quickstart, you created a resource group, network resources, and backend servers. You then used those
resources to create an application gateway. To learn more about application gateways and their associated
resources, continue to the how-to articles.
Create an application gateway with a web application
firewall using the Azure portal
5/1/2018 • 4 minutes to read
You can use the Azure portal to create an application gateway with a web application firewall (WAF ). The WAF uses
OWASP rules to protect your application. These rules include protection against attacks such as SQL injection,
cross-site scripting attacks, and session hijacks.
In this article, you learn how to:
Create an application gateway with WAF enabled
Create the virtual machines used as backend servers
Create a storage account and configure diagnostics
Log in to Azure
Log in to the Azure portal at https://portal.azure.com.
3. Enter myBackendSubnet for the name of the subnet and then click OK.
Set-AzureRmVMExtension `
-ResourceGroupName myResourceGroupAG `
-ExtensionName IIS `
-VMName myVM `
-Publisher Microsoft.Compute `
-ExtensionType CustomScriptExtension `
-TypeHandlerVersion 1.4 `
-SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' `
-Location EastUS
3. Create a second virtual machine and install IIS using the steps that you just finished. Enter myVM2 for its
name and for VMName in Set-AzureRmVMExtension.
Add backend servers
1. Click All resources, and then click myAppGateway.
2. Click Backend pools. A default pool was automatically created with the application gateway. Click
appGatewayBackendPool.
3. Click Add target to add each virtual machine that you created to the backend pool.
4. Click Save.
Configure diagnostics
Configure diagnostics to record data into the ApplicationGatewayAccessLog, ApplicationGatewayPerformanceLog,
and ApplicationGatewayFirewallLog logs.
1. In the left-hand menu, click All resources, and then select myAppGateway.
2. Under Monitoring, click Diagnostics logs.
3. Click Add diagnostics setting.
4. Enter myDiagnosticsSettings as the name for the diagnostics settings.
5. Select Archive to a storage account, and then click Configure to select the myagstore1 storage account that
you previously created.
6. Select the application gateway logs to collect and retain.
7. Click Save.
Test the application gateway
1. Find the public IP address for the application gateway on the Overview screen. Click All resources and then
click myAGPublicIPAddress.
2. Copy the public IP address, and then paste it into the address bar of your browser.
Next steps
In this article, you learned how to:
Create an application gateway with WAF enabled
Create the virtual machines used as backend servers
Create a storage account and configure diagnostics
To learn more about application gateways and their associated resources, continue to the how-to articles.
Configure the geographic traffic routing method
using Traffic Manager
2/16/2018 • 3 minutes to read • Edit Online
The Geographic traffic routing method allows you to direct traffic to specific endpoints based on the geographic
location where the requests originate. This tutorial shows you how to create a Traffic Manager profile with this
routing method and configure the endpoints to receive traffic from specific geographies.
Add endpoints
1. Search for the Traffic Manager profile name you created in the portal’s search bar and click on the result when it
is shown.
2. Navigate to Settings -> Endpoints in Traffic Manager.
3. Click Add to open the Add endpoint page.
4. In the Add endpoint page that is displayed, complete the following:
5. Select Type depending upon the type of endpoint you are adding. For geographic routing profiles used in
production, we strongly recommend using nested endpoint types containing a child profile with more than one
endpoint. For more details, see FAQs about geographic traffic routing methods.
6. Provide a Name by which you want to recognize this endpoint.
7. Certain fields on this page depend on the type of endpoint you are adding:
a. If you are adding an Azure endpoint, select the Target resource type and the Target based on the
resource you want to direct traffic to
b. If you are adding an External endpoint, provide the Fully-qualified domain name (FQDN) for your
endpoint.
c. If you are adding a Nested endpoint, select the Target resource that corresponds to the child profile
you want to use and specify the Minimum child endpoints count.
8. In the Geo-mapping section, use the drop-down to add the regions from which you want traffic to be sent to
this endpoint. You must add at least one region, and you can have multiple regions mapped.
9. Repeat these steps for all endpoints you want to add under this profile.
Create an application gateway with an internal load balancer (ILB)
Azure Application Gateway can be configured with an Internet-facing VIP or with an internal endpoint that is not
exposed to the Internet, also known as an internal load balancer (ILB) endpoint. Configuring the gateway with an
ILB is useful for internal line-of-business applications that are not exposed to the Internet. It's also useful for
services and tiers within a multi-tier application that sit in a security boundary that is not exposed to the Internet
but still require round-robin load distribution, session stickiness, or Secure Sockets Layer (SSL) termination.
This article walks you through the steps to configure an application gateway with an ILB.
Connect-AzureRmAccount
Step 2
Check the subscriptions for the account.
Get-AzureRmSubscription
Step 4
Create a new resource group (skip this step if you're using an existing resource group).
Azure Resource Manager requires that all resource groups specify a location. This location is used as the default
location for resources in that resource group. Make sure that all commands to create an application gateway use
the same resource group.
In the preceding example, we created a resource group called "appgw-rg" in the "West US" location.
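The command itself is not shown above; a minimal sketch of this step, assuming the AzureRm PowerShell module and the names from this example, might look like:

```powershell
# Create a resource group to hold all of the application gateway resources
New-AzureRmResourceGroup -Name appgw-rg -Location "West US"
```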
This step assigns the address range 10.0.0.0/24 to a subnet variable to be used to create a virtual network.
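A sketch of this assignment, assuming the subnet name "subnet01" (the name is illustrative; the article only specifies the address range):

```powershell
# Assign the 10.0.0.0/24 range to a subnet configuration object for the virtual network
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name subnet01 -AddressPrefix 10.0.0.0/24
```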
Step 2
This step creates a virtual network named "appgwvnet" in resource group "appgw-rg" for the West US region
using the prefix 10.0.0.0/16 with subnet 10.0.0.0/24.
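A sketch of this step, assuming the subnet configuration object from the previous step is in $subnet:

```powershell
# Create the virtual network using the subnet configuration from the previous step
$vnet = New-AzureRmVirtualNetwork -Name appgwvnet -ResourceGroupName appgw-rg `
  -Location "West US" -AddressPrefix 10.0.0.0/16 -Subnet $subnet
```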
Step 3
$subnet = $vnet.subnets[0]
This step assigns the subnet object to variable $subnet for the next steps.
This step creates an application gateway IP configuration named "gatewayIP01". When Application Gateway starts,
it picks up an IP address from the configured subnet and routes network traffic to the IP addresses in the back-end
IP pool. Keep in mind that each instance takes one IP address.
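A sketch of this step, using the $subnet object assigned just above:

```powershell
# Create the gateway IP configuration from the subnet retrieved in the previous step
$gipconfig = New-AzureRmApplicationGatewayIPConfiguration -Name gatewayIP01 -Subnet $subnet
```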
Step 2
This step configures the back-end IP address pool named "pool01" with IP addresses "10.1.1.8, 10.1.1.9, 10.1.1.10".
Those are the IP addresses that receive the network traffic that comes from the front-end IP endpoint. Replace
the preceding IP addresses with your own application IP address endpoints.
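A sketch of this step with the addresses named above:

```powershell
# Define the back-end pool; replace these addresses with your own endpoints
$pool = New-AzureRmApplicationGatewayBackendAddressPool -Name pool01 `
  -BackendIPAddresses 10.1.1.8, 10.1.1.9, 10.1.1.10
```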
Step 3
This step configures application gateway setting "poolsetting01" for the load balanced network traffic in the back-
end pool.
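A sketch of this step; the port, protocol, and affinity values below are illustrative assumptions, not specified by the article:

```powershell
# Configure back-end HTTP settings for load-balanced traffic to the pool
$poolSetting = New-AzureRmApplicationGatewayBackendHttpSettings -Name poolsetting01 `
  -Port 80 -Protocol Http -CookieBasedAffinity Enabled
```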
Step 4
This step configures the front-end IP port named "frontendport01" for the ILB.
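A sketch of this step, assuming port 80 (the article does not state the port):

```powershell
# Define the front-end port the ILB endpoint listens on
$fp = New-AzureRmApplicationGatewayFrontendPort -Name frontendport01 -Port 80
```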
Step 5
This step creates the front-end IP configuration called "fipconfig01" and associates it with a private IP from the
current virtual network subnet.
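A sketch of this step; passing -Subnet (and no public IP) is what makes the front-end IP a private, ILB-style endpoint:

```powershell
# Associate the front-end IP configuration with a private IP from the subnet
$fipconfig = New-AzureRmApplicationGatewayFrontendIPConfig -Name fipconfig01 -Subnet $subnet
```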
Step 6
This step creates the listener called "listener01" and associates the front-end port to the front-end IP configuration.
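A sketch of this step, assuming the HTTP protocol and the variables created above:

```powershell
# Tie the listener to the front-end IP configuration and the front-end port
$listener = New-AzureRmApplicationGatewayHttpListener -Name listener01 `
  -Protocol Http -FrontendIPConfiguration $fipconfig -FrontendPort $fp
```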
Step 7
$rule = New-AzureRmApplicationGatewayRequestRoutingRule -Name rule01 -RuleType Basic `
  -BackendHttpSettings $poolSetting -HttpListener $listener -BackendAddressPool $pool
This step creates the load balancer routing rule called "rule01" that configures the load balancer behavior.
Step 8
NOTE
The default value for InstanceCount is 2, with a maximum value of 10. The default value for GatewaySize is Medium. You can
choose between Standard_Small, Standard_Medium, and Standard_Large.
This step creates an application gateway with all configuration items from the preceding steps. In the example, the
application gateway is called "appgwtest".
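A sketch of this step using the defaults noted above and the objects created in the preceding steps; the SKU values are the documented defaults, not requirements:

```powershell
# Define the SKU (Medium size, 2 instances are the defaults) and create the gateway
$sku = New-AzureRmApplicationGatewaySku -Name Standard_Medium -Tier Standard -Capacity 2
$appgw = New-AzureRmApplicationGateway -Name appgwtest -ResourceGroupName appgw-rg `
  -Location "West US" -BackendAddressPools $pool -BackendHttpSettingsCollection $poolSetting `
  -FrontendIpConfigurations $fipconfig -GatewayIpConfigurations $gipconfig `
  -FrontendPorts $fp -HttpListeners $listener -RequestRoutingRules $rule -Sku $sku
```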
Step 2
Use Stop-AzureRmApplicationGateway to stop the application gateway. This sample shows the
Stop-AzureRmApplicationGateway cmdlet on the first line, followed by the output.
Once the application gateway is in a stopped state, use the Remove-AzureRmApplicationGateway cmdlet to remove the
service.
NOTE
The -Force switch can be used to suppress the remove confirmation message.
To verify that the service has been removed, you can use the Get-AzureRmApplicationGateway cmdlet. This step is not
required.
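A sketch of the stop, remove, and verify sequence described above, assuming the gateway and resource group names from this example:

```powershell
# Stop the gateway, remove it (suppressing the confirmation prompt), then confirm removal
Stop-AzureRmApplicationGateway -Name appgwtest -ResourceGroupName appgw-rg
Remove-AzureRmApplicationGateway -Name appgwtest -ResourceGroupName appgw-rg -Force
# After removal, this lookup should fail with a 'not found' error
Get-AzureRmApplicationGateway -Name appgwtest -ResourceGroupName appgw-rg
```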
Next steps
If you want to configure SSL offload, see Configure an application gateway for SSL offload.
If you want to configure an application gateway to use with an ILB, see Create an application gateway with an
internal load balancer (ILB).
If you want more information about load balancing options in general, see:
Azure Load Balancer
Azure Traffic Manager
Configure a VNet-to-VNet VPN gateway connection
using the Azure portal
4/18/2018 • 18 minutes to read • Edit Online
This article helps you connect virtual networks by using the VNet-to-VNet connection type. The virtual networks
can be in the same or different regions, and from the same or different subscriptions. When connecting VNets
from different subscriptions, the subscriptions do not need to be associated with the same Active Directory tenant.
The steps in this article apply to the Resource Manager deployment model and use the Azure portal. You can also
create this configuration using a different deployment tool or deployment model by selecting a different option
from the following list:
This article helps you connect VNets using the VNet-to-VNet connection type. When using these steps as an
exercise, you can use the example settings values. In the example, the virtual networks are in the same
subscription, but in different resource groups. If your VNets are in different subscriptions, you can't create the
connection in the portal. You can use PowerShell or CLI. For more information about VNet-to-VNet connections,
see the VNet-to-VNet FAQ at the end of this article.
Example settings
Values for TestVNet1:
VNet Name: TestVNet1
Address space: 10.11.0.0/16
Subscription: Select the subscription you want to use
Resource Group: TestRG1
Location: East US
Subnet Name: FrontEnd
Subnet Address range: 10.11.0.0/24
Gateway Subnet name: GatewaySubnet (this will auto-fill in the portal)
Gateway Subnet address range: 10.11.255.0/27
DNS Server: Use the IP address of your DNS Server
Virtual Network Gateway Name: TestVNet1GW
Gateway Type: VPN
VPN type: Route-based
SKU: Select the Gateway SKU you want to use
Public IP address name: TestVNet1GWIP
Connection Name: TestVNet1toTestVNet4
Shared key: You can create the shared key yourself. For this example, we'll use abc123. The important thing is
that when you create the connection between the VNets, the value must match.
Values for TestVNet4:
VNet Name: TestVNet4
Address space: 10.41.0.0/16
Subscription: Select the subscription you want to use
Resource Group: TestRG4
Location: West US
Subnet Name: FrontEnd
Subnet Address range: 10.41.0.0/24
GatewaySubnet name: GatewaySubnet (this will auto-fill in the portal)
GatewaySubnet address range: 10.41.255.0/27
DNS Server: Use the IP address of your DNS Server
Virtual Network Gateway Name: TestVNet4GW
Gateway Type: VPN
VPN type: Route-based
SKU: Select the Gateway SKU you want to use
Public IP address name: TestVNet4GWIP
Connection Name: TestVNet4toTestVNet1
Shared key: You can create the shared key yourself. For this example, we'll use abc123. The important thing is
that when you create the connection between the VNets, the value must match.
NOTE
In order for this VNet to connect to an on-premises location you need to coordinate with your on-premises network
administrator to carve out an IP address range that you can use specifically for this virtual network. If a duplicate address
range exists on both sides of the VPN connection, traffic does not route the way you may expect it to. Additionally, if you
want to connect this VNet to another VNet, the address space cannot overlap with that of the other VNet. Take care
to plan your network configuration accordingly.
1. From a browser, navigate to the Azure portal and, if necessary, sign in with your Azure account.
2. Click +. In the Search the marketplace field, type "Virtual Network". Locate Virtual Network from the
returned list and click to open the Virtual Network page.
3. Near the bottom of the Virtual Network page, from the Select a deployment model list, select Resource
Manager, and then click Create.
4. On the Create virtual network page, configure the VNet settings. When you fill in the fields, the red
exclamation mark becomes a green check mark when the characters entered in the field are valid. There
may be values that are auto-filled. If so, replace the values with your own. The Create virtual network
page looks similar to the following example:
IMPORTANT
When working with gateway subnets, avoid associating a network security group (NSG) to the gateway subnet. Associating a
network security group to this subnet may cause your VPN gateway to stop functioning as expected. For more information
about network security groups, see What is a network security group?
4. The Name for your subnet is automatically filled in with the value 'GatewaySubnet'. This value is required
in order for Azure to recognize the subnet as the gateway subnet. Adjust the auto-filled Address range
values to match your configuration requirements, then click OK at the bottom of the page to create the
subnet.
4. Verify the settings. You can select Pin to dashboard at the bottom of the page if you want your gateway to
appear on the dashboard.
5. Click Create to begin creating the VPN gateway. The settings are validated and you'll see the "Deploying
Virtual network gateway" tile on the dashboard. Creating a gateway can take up to 45 minutes. You may need
to refresh your portal page to see the completed status.
After the gateway is created, view the IP address that has been assigned to it by looking at the virtual network in
the portal. The gateway appears as a connected device. You can click the connected device (your virtual network
gateway) to view more information.
3. On the Add connection page, in the name field, type a name for your connection. For example,
TestVNet1toTestVNet4.
4. For Connection type, select VNet-to-VNet from the dropdown.
5. The First virtual network gateway field value is automatically filled in because you are creating this
connection from the specified virtual network gateway.
6. The Second virtual network gateway field is the virtual network gateway of the VNet that you want to
create a connection to. Click Choose another virtual network gateway to open the Choose virtual
network gateway page.
7. View the virtual network gateways that are listed on this page. Notice that only virtual network gateways that
are in your subscription are listed. If you want to connect to a virtual network gateway that is not in your
subscription, please use the PowerShell article.
8. Click the virtual network gateway that you want to connect to.
9. In the Shared key field, type a shared key for your connection. You can generate or create this key yourself. In
a site-to-site connection, the key you use would be exactly the same for your on-premises device and your
virtual network gateway connection. The concept is similar here, except that rather than connecting to a VPN
device, you are connecting to another virtual network gateway.
10. Click OK at the bottom of the page to save your changes.
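For VNets in different subscriptions, which the portal does not support (as noted earlier), the equivalent connection can be created with PowerShell. A hedged sketch using this article's example names; the Get- calls assume each gateway already exists:

```powershell
# Retrieve both virtual network gateways (run each Get- in the subscription that owns it)
$gw1 = Get-AzureRmVirtualNetworkGateway -Name TestVNet1GW -ResourceGroupName TestRG1
$gw4 = Get-AzureRmVirtualNetworkGateway -Name TestVNet4GW -ResourceGroupName TestRG4
# Create the connection; repeat in the reverse direction, and the shared key must match
New-AzureRmVirtualNetworkGatewayConnection -Name TestVNet1toTestVNet4 -ResourceGroupName TestRG1 `
  -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw4 `
  -Location "East US" -ConnectionType Vnet2Vnet -SharedKey 'abc123'
```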
When data begins flowing, you see values for Data in and Data out.
VNet-to-VNet FAQ
View the FAQ details for additional information about VNet-to-VNet connections.
The VNet-to-VNet FAQ applies to VPN Gateway connections. If you are looking for VNet Peering, see Virtual
Network Peering.
Does Azure charge for traffic between VNets?
VNet-to-VNet traffic within the same region is free for both directions when using a VPN gateway connection.
Cross region VNet-to-VNet egress traffic is charged with the outbound inter-VNet data transfer rates based on
the source regions. Refer to the VPN Gateway pricing page for details. If you are connecting your VNets using
VNet Peering, rather than VPN Gateway, see the Virtual Network pricing page.
Does VNet-to-VNet traffic travel across the Internet?
No. VNet-to-VNet traffic travels across the Microsoft Azure backbone, not the Internet.
Can I establish a VNet-to-VNet connection across AAD Tenants?
Yes, VNet-to-VNet connections using Azure VPN gateways work across AAD Tenants.
Is VNet-to-VNet traffic secure?
Yes, it is protected by IPsec/IKE encryption.
Do I need a VPN device to connect VNets together?
No. Connecting multiple Azure virtual networks together doesn't require a VPN device unless cross-premises
connectivity is required.
Do my VNets need to be in the same region?
No. The virtual networks can be in the same or different Azure regions (locations).
If the VNets are not in the same subscription, do the subscriptions need to be associated with the same AD
tenant?
No.
Can I use VNet-to-VNet to connect virtual networks in separate Azure instances?
No. VNet-to-VNet supports connecting virtual networks within the same Azure instance. For example, you can’t
create a connection between public Azure and the Chinese / German / US Gov Azure instances. For these
scenarios, consider using a Site-to-Site VPN connection.
Can I use VNet-to-VNet along with multi-site connections?
Yes. Virtual network connectivity can be used simultaneously with multi-site VPNs.
How many on-premises sites and virtual networks can one virtual network connect to?
See Gateway requirements table.
Can I use VNet-to-VNet to connect VMs or cloud services outside of a VNet?
No. VNet-to-VNet supports connecting virtual networks. It does not support connecting virtual machines or cloud
services that are not in a virtual network.
Can a cloud service or a load balancing endpoint span VNets?
No. A cloud service or a load balancing endpoint can't span across virtual networks, even if they are connected
together.
Can I use a PolicyBased VPN type for VNet-to-VNet or Multi-Site connections?
No. VNet-to-VNet and Multi-Site connections require Azure VPN gateways with RouteBased (previously called
Dynamic Routing) VPN types.
Can I connect a VNet with a RouteBased VPN Type to another VNet with a PolicyBased VPN type?
No, both virtual networks MUST be using route-based (previously called Dynamic Routing) VPNs.
Do VPN tunnels share bandwidth?
Yes. All VPN tunnels of the virtual network share the available bandwidth on the Azure VPN gateway and the
same VPN gateway uptime SLA in Azure.
Are redundant tunnels supported?
Redundant tunnels between a pair of virtual networks are supported when one virtual network gateway is
configured as active-active.
Can I have overlapping address spaces for VNet-to-VNet configurations?
No. You can't have overlapping IP address ranges.
Can there be overlapping address spaces among connected virtual networks and on-premises local sites?
No. You can't have overlapping IP address ranges.
Next steps
See Network Security for information about how you can limit network traffic to resources in a virtual network.
See Virtual network traffic routing for information about how Azure routes traffic between Azure, on-premises,
and Internet resources.
Create a Site-to-Site connection in the Azure portal
4/9/2018 • 19 minutes to read • Edit Online
This article shows you how to use the Azure portal to create a Site-to-Site VPN gateway connection from your
on-premises network to the VNet. The steps in this article apply to the Resource Manager deployment model. You
can also create this configuration using a different deployment tool or deployment model by selecting a different
option from the following list:
A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network
over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-
premises that has an externally facing public IP address assigned to it. For more information about VPN gateways,
see About VPN gateway.
NOTE
In order for this VNet to connect to an on-premises location you need to coordinate with your on-premises network
administrator to carve out an IP address range that you can use specifically for this virtual network. If a duplicate address
range exists on both sides of the VPN connection, traffic does not route the way you may expect it to. Additionally, if you
want to connect this VNet to another VNet, the address space cannot overlap with that of the other VNet. Take care
to plan your network configuration accordingly.
1. From a browser, navigate to the Azure portal and sign in with your Azure account.
2. Click Create a resource. In the Search the marketplace field, type 'virtual network'. Locate Virtual network
from the returned list and click to open the Virtual Network page.
3. Near the bottom of the Virtual Network page, from the Select a deployment model list, select Resource
Manager, and then click Create. This opens the 'Create virtual network' page.
4. On the Create virtual network page, configure the VNet settings. When you fill in the fields, the red
exclamation mark becomes a green check mark when the characters entered in the field are valid.
Name: Enter the name for your virtual network. In this example, we use VNet1.
Address space: Enter the address space. If you have multiple address spaces to add, add your first
address space. You can add additional address spaces later, after creating the VNet. Make sure that the
address space that you specify does not overlap with the address space for your on-premises location.
Subscription: Verify that the subscription listed is the correct one. You can change subscriptions by
using the drop-down.
Resource group: Select an existing resource group, or create a new one by typing a name for your new
resource group. If you are creating a new group, name the resource group according to your planned
configuration values. For more information about resource groups, visit Azure Resource Manager
Overview.
Location: Select the location for your VNet. The location determines where the resources that you
deploy to this VNet will reside.
Subnet: Add the first subnet name and subnet address range. You can add additional subnets and the
gateway subnet later, after creating this VNet.
5. Select Pin to dashboard if you want to be able to find your VNet easily on the dashboard, and then click
Create. After clicking Create, you will see a tile on your dashboard that will reflect the progress of your
VNet. The tile changes as the VNet is being created.
4. The Name for your subnet is automatically filled in with the value 'GatewaySubnet'. The GatewaySubnet
value is required in order for Azure to recognize the subnet as the gateway subnet. Adjust the auto-filled
Address range values to match your configuration requirements.
IMPORTANT
When working with gateway subnets, avoid associating a network security group (NSG) to the gateway subnet. Associating
a network security group to this subnet may cause your VPN gateway to stop functioning as expected. For more
information about network security groups, see What is a network security group?
4. Verify the settings. You can select Pin to dashboard at the bottom of the page if you want your gateway to
appear on the dashboard.
5. Click Create to begin creating the VPN gateway. The settings are validated and you'll see the "Deploying
Virtual network gateway" tile on the dashboard. Creating a gateway can take up to 45 minutes. You may need
to refresh your portal page to see the completed status.
After the gateway is created, view the IP address that has been assigned to it by looking at the virtual network in
the portal. The gateway appears as a connected device. You can click the connected device (your virtual network
gateway) to view more information.
3. On the Create local network gateway page, specify the values for your local network gateway.
Name: Specify a name for your local network gateway object.
IP address: This is the public IP address of the VPN device that you want Azure to connect to. Specify a
valid public IP address. The IP address cannot be behind NAT and has to be reachable by Azure. If you
don't have the IP address right now, you can use the values shown in the example, but you'll need to go
back and replace your placeholder IP address with the public IP address of your VPN device. Otherwise,
Azure will not be able to connect.
Address Space refers to the address ranges for the network that this local network represents. You can
add multiple address space ranges. Make sure that the ranges you specify here do not overlap with
ranges of other networks that you want to connect to. Azure will route the address range that you
specify to the on-premises VPN device IP address. Use your own values here if you want to connect to
your on-premises site, not the values shown in the example.
Configure BGP settings: Use only when configuring BGP. Otherwise, don't select this.
Subscription: Verify that the correct subscription is showing.
Resource Group: Select the resource group that you want to use. You can either create a new resource
group, or select one that you have already created.
Location: Select the location that this object will be created in. You may want to select the same location
that your VNet resides in, but you are not required to do so.
4. When you have finished specifying the values, click the Create button at the bottom of the page to create
the local network gateway.
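The same local network gateway can be created with PowerShell. A hedged sketch; the name, resource group, public IP, and on-premises address prefix below are all placeholder assumptions to be replaced with your own values:

```powershell
# Local network gateway representing the on-premises VPN device and its address space
New-AzureRmLocalNetworkGateway -Name Site1 -ResourceGroupName TestRG1 -Location "East US" `
  -GatewayIpAddress '203.0.113.1' -AddressPrefix '10.1.0.0/24'
```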
# Assumes $VMs and $Nics were populated earlier, for example:
# $VMs = Get-AzureRmVM
# $Nics = Get-AzureRmNetworkInterface | Where-Object VirtualMachine -ne $null
foreach($Nic in $Nics)
{
  # Match each NIC to its VM, then print the VM name with its private IP and allocation method
  $VM = $VMs | Where-Object -Property Id -eq $Nic.VirtualMachine.Id
  $Prv = $Nic.IpConfigurations | Select-Object -ExpandProperty PrivateIpAddress
  $Alloc = $Nic.IpConfigurations | Select-Object -ExpandProperty PrivateIpAllocationMethod
  Write-Output "$($VM.Name): $Prv,$Alloc"
}
2. Verify that you are connected to your VNet using the VPN connection.
3. Open Remote Desktop Connection by typing "RDP" or "Remote Desktop Connection" in the search box on
the taskbar, then select Remote Desktop Connection. You can also open Remote Desktop Connection using the
'mstsc' command in PowerShell.
4. In Remote Desktop Connection, enter the private IP address of the VM. You can click "Show Options" to adjust
additional settings, then connect.
To troubleshoot an RDP connection to a VM
If you are having trouble connecting to a virtual machine over your VPN connection, check the following:
Verify that your VPN connection is successful.
Verify that you are connecting to the private IP address for the VM.
If you can connect to the VM using the private IP address, but not the computer name, verify that you have
configured DNS properly. For more information about how name resolution works for VMs, see Name
Resolution for VMs.
For more information about RDP connections, see Troubleshoot Remote Desktop connections to a VM.
Next steps
For information about BGP, see the BGP Overview and How to configure BGP.
For information about forced tunneling, see About forced tunneling.
For information about Highly Available Active-Active connections, see Highly Available cross-premises and
VNet-to-VNet connectivity.
For information about how to limit network traffic to resources in a virtual network, see Network Security.
For information about how Azure routes traffic between Azure, on-premises, and Internet resources, see
Virtual network traffic routing.
For information about creating a Site-to-Site VPN connection using an Azure Resource Manager template, see
Create a Site-to-Site VPN Connection.
For information about creating a VNet-to-VNet VPN connection using an Azure Resource Manager template, see
Deploy HBase geo replication.
Configure a Point-to-Site connection to a VNet using
native Azure certificate authentication: Azure portal
3/21/2018 • 27 minutes to read • Edit Online
This article helps you securely connect individual clients running Windows or Mac OS X to an Azure VNet. Point-
to-Site VPN connections are useful when you want to connect to your VNet from a remote location, such as when
you are telecommuting from home or a conference. You can also use P2S instead of a Site-to-Site VPN when you have
only a few clients that need to connect to a VNet. Point-to-Site connections do not require a VPN device or a
public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol) or
IKEv2. For more information about Point-to-Site VPN, see About Point-to-Site VPN.
Architecture
Point-to-Site native Azure certificate authentication connections use the following items, which you configure in
this exercise:
A RouteBased VPN gateway.
The public key (.cer file) for a root certificate, which is uploaded to Azure. Once the certificate is uploaded, it is
considered a trusted certificate and is used for authentication.
A client certificate that is generated from the root certificate. The client certificate is installed on each client
computer that will connect to the VNet. This certificate is used for client authentication.
A VPN client configuration. The VPN client configuration files contain the necessary information for the client to
connect to the VNet. The files configure the existing VPN client that is native to the operating system. Each client
that connects must be configured using the settings in the configuration files.
Example values
You can use the following values to create a test environment, or refer to these values to better understand the
examples in this article:
VNet Name: VNet1
Address space: 192.168.0.0/16
For this example, we use only one address space. You can have more than one address space for your VNet.
Subnet name: FrontEnd
Subnet address range: 192.168.1.0/24
Subscription: If you have more than one subscription, verify that you are using the correct one.
Resource Group: TestRG
Location: East US
GatewaySubnet: 192.168.200.0/24
DNS Server: (optional) IP address of the DNS server that you want to use for name resolution.
Virtual network gateway name: VNet1GW
Gateway type: VPN
VPN type: Route-based
Public IP address name: VNet1GWpip
Connection type: Point-to-site
Client address pool: 172.16.201.0/24
VPN clients that connect to the VNet using this Point-to-Site connection receive an IP address from the client
address pool.
NOTE
If you want this VNet to connect to an on-premises location (in addition to creating a P2S configuration), you need to
coordinate with your on-premises network administrator to carve out an IP address range that you can use specifically for
this virtual network. If a duplicate address range exists on both sides of the VPN connection, traffic does not route the way
you may expect it to. Additionally, if you want to connect this VNet to another VNet, the address space cannot overlap with
that of the other VNet. Take care to plan your network configuration accordingly.
1. From a browser, navigate to the Azure portal and, if necessary, sign in with your Azure account.
2. Click +. In the Search the marketplace field, type "Virtual Network". Locate Virtual Network from the
returned list and click to open the Virtual Network page.
3. Near the bottom of the Virtual Network page, from the Select a deployment model list, select Resource
Manager, and then click Create.
4. On the Create virtual network page, configure the VNet settings. When you fill in the fields, the red
exclamation mark becomes a green check mark when the characters entered in the field are valid. There may
be values that are auto-filled. If so, replace the values with your own. The Create virtual network page
looks similar to the following example:
12. After clicking Create, you will see a tile on your dashboard that will reflect the progress of your VNet. The
tile changes as the VNet is being created.
2. Add a gateway subnet
Before connecting your virtual network to a gateway, you first need to create the gateway subnet for the virtual
network to which you want to connect. The gateway services use the IP addresses specified in the gateway subnet.
If possible, create a gateway subnet using a CIDR block of /28 or /27 to provide enough IP addresses to
accommodate additional future configuration requirements.
1. In the portal, navigate to the Resource Manager virtual network for which you want to create a virtual network
gateway.
2. In the Settings section of your VNet page, click Subnets to expand the Subnets page.
3. On the Subnets page, click +Gateway subnet to open the Add subnet page.
4. The Name for your subnet is automatically filled in with the value 'GatewaySubnet'. This value is required in
order for Azure to recognize the subnet as the gateway subnet. Adjust the auto-filled Address range values
to match your configuration requirements, then click OK at the bottom of the page to create the subnet.
3. On the Create virtual network gateway page, specify the values for your virtual network gateway.
Name: Name your gateway. This is not the same as naming a gateway subnet. It's the name of the
gateway object you are creating.
Gateway type: Select VPN. VPN gateways use the virtual network gateway type VPN.
VPN type: Select the VPN type that is specified for your configuration. Most configurations require a
Route-based VPN type.
SKU: Select the gateway SKU from the dropdown. The SKUs listed in the dropdown depend on the VPN
type you select. For more information about gateway SKUs, see Gateway SKUs.
Location: You may need to scroll to see Location. Adjust the Location field to point to the location
where your virtual network is located. If the location is not pointing to the region where your virtual
network resides, when you select a virtual network in the next step, it will not appear in the drop-down
list.
Virtual network: Choose the virtual network to which you want to add this gateway. Click Virtual
network to open the 'Choose a virtual network' page. Select the VNet. If you don't see your VNet, make
sure the Location field is pointing to the region in which your virtual network is located.
Gateway subnet address range: You will only see this setting if you did not previously create a gateway
subnet for your virtual network. If you previously created a valid gateway subnet, this setting will not
appear.
First IP configuration: The 'Choose public IP address' page creates a public IP address object that
gets associated to the VPN gateway. The public IP address is dynamically assigned to this object
when the VPN gateway is created. VPN Gateway currently only supports Dynamic Public IP address
allocation. However, this does not mean that the IP address changes after it has been assigned to your
VPN gateway. The only time the Public IP address changes is when the gateway is deleted and re-
created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your
VPN gateway.
First, click Create gateway IP configuration to open the 'Choose public IP address' page, then
click +Create new to open the 'Create public IP address' page.
Next, input a Name for your public IP address. Leave the SKU as Basic unless there is a
specific reason to change it to something else, then click OK at the bottom of this page to save
your changes.
4. Verify the settings. You can select Pin to dashboard at the bottom of the page if you want your gateway to
appear on the dashboard.
5. Click Create to begin creating the VPN gateway. The settings are validated and you'll see the "Deploying Virtual
network gateway" tile on the dashboard. Creating a gateway can take up to 45 minutes. You may need to refresh
your portal page to see the completed status.
After the gateway is created, view the IP address that has been assigned to it by looking at the virtual network in
the portal. The gateway appears as a connected device. You can click the connected device (your virtual network
gateway) to view more information.
NOTE
The Basic SKU does not support IKEv2 or RADIUS authentication.
5. Generate certificates
Certificates are used by Azure to authenticate clients connecting to a VNet over a Point-to-Site VPN connection.
Once you obtain a root certificate, you upload the public key information to Azure. The root certificate is then
considered 'trusted' by Azure for connection over P2S to the virtual network. You also generate client certificates
from the trusted root certificate, and then install them on each client computer. The client certificate is used to
authenticate the client when it initiates a connection to the VNet.
1. Obtain the .cer file for the root certificate
You can use either a root certificate that was generated using an enterprise solution (recommended), or you can
generate a self-signed certificate. After creating the root certificate, export the public certificate data (not the private
key) as a Base-64 encoded X.509 .cer file and upload the public certificate data to Azure.
Enterprise certificate: If you are using an enterprise solution, you can use your existing certificate chain.
Obtain the .cer file for the root certificate that you want to use.
Self-signed root certificate: If you aren't using an enterprise certificate solution, you need to create a self-
signed root certificate. It's important that you follow the steps in one of the P2S certificate articles below.
Otherwise, the certificates you create won't be compatible with P2S connections and clients receive a
connection error when trying to connect. You can use Azure PowerShell, MakeCert, or OpenSSL. The steps
in the provided articles generate a compatible certificate:
Windows 10 PowerShell instructions: These instructions require Windows 10 and PowerShell to
generate certificates. Client certificates that are generated from the root certificate can be installed on any
supported P2S client.
MakeCert instructions: Use MakeCert if you don't have access to a Windows 10 computer for generating
certificates. MakeCert is deprecated, but you can still use it to generate certificates. Client
certificates that are generated from the root certificate can be installed on any supported P2S client.
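The Base-64 encoded X.509 (.cer) format that Azure expects is an ordinary PEM wrapping of the certificate's DER bytes. As a hypothetical sketch of that encoding (the helper name is illustrative, and the input bytes below are a placeholder, not a real certificate):

```python
import base64
import textwrap

def der_to_base64_cer(der_bytes: bytes) -> str:
    """Wrap raw DER certificate bytes as a Base-64 encoded X.509 (.cer) string."""
    b64 = base64.b64encode(der_bytes).decode("ascii")
    body = "\n".join(textwrap.wrap(b64, 64))  # PEM uses 64-character lines
    return f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n"

# Placeholder bytes only; export the real DER data from your certificate store.
pem = der_to_base64_cer(b"\x30\x82\x00\x10" + bytes(16))
print(pem.splitlines()[0])
```

In practice you export this format directly from the certificate tooling described in the articles above rather than encoding it by hand.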
2. Generate a client certificate
Each client computer that connects to a VNet using Point-to-Site must have a client certificate installed. The client
certificate is generated from the root certificate and installed on each client computer. If a valid client certificate is
not installed and the client tries to connect to the VNet, authentication fails.
You can either generate a unique certificate for each client, or you can use the same certificate for multiple clients.
The advantage to generating unique client certificates is the ability to revoke a single certificate. Otherwise, if
multiple clients are using the same client certificate and you need to revoke it, you have to generate and install new
certificates for all the clients that use that certificate to authenticate.
You can generate client certificates using the following methods:
Enterprise certificate:
If you are using an enterprise certificate solution, generate a client certificate with the common name
value format 'name@yourdomain.com', rather than the 'domain name\username' format.
Make sure the client certificate is based on the 'User' certificate template that has 'Client Authentication'
as the first item in the use list, rather than Smart Card Logon, etc. You can check the certificate by double-
clicking the client certificate and viewing Details > Enhanced Key Usage.
Self-signed root certificate: It's important that you follow the steps in one of the P2S certificate articles
below. Otherwise, the client certificates you create won't be compatible with P2S connections and clients
receive an error when trying to connect. The steps in either of the following articles generate a compatible
client certificate:
Windows 10 PowerShell instructions: These instructions require Windows 10 and PowerShell to
generate certificates. The certificates that are generated can be installed on any supported P2S client.
MakeCert instructions: Use MakeCert if you don't have access to a Windows 10 computer for generating
certificates. MakeCert is deprecated, but you can still use it to generate certificates. The
certificates that are generated can be installed on any supported P2S client.
When you generate a client certificate from a self-signed root certificate using the preceding instructions, it's
automatically installed on the computer that you used to generate it. If you want to install a client certificate
on another client computer, you need to export it as a .pfx, along with the entire certificate chain. This creates
a .pfx file that contains the root certificate information that is required for the client to successfully
authenticate. For steps to export a certificate, see Certificates - export a client certificate.
3. On the Point-to-site configuration page, in the Address pool box, add the private IP address range that
you want to use. VPN clients dynamically receive an IP address from the range that you specify. Click Save
to validate and save the setting.
NOTE
If you don't see Tunnel type or Authentication type in the portal on this page, your gateway is using the Basic SKU.
The Basic SKU does not support IKEv2 or RADIUS authentication.
4. Paste the certificate data into the Public Certificate Data field. Name the certificate, and then click Save.
You can add up to 20 trusted root certificates.
5. Click Save at the top of the page to save all of the configuration settings.
10. Install an exported client certificate
If you want to create a P2S connection from a client computer other than the one you used to generate the client
certificates, you need to install a client certificate. When installing a client certificate, you need the password that
was created when the client certificate was exported.
Make sure the client certificate was exported as a .pfx along with the entire certificate chain (which is the default).
Otherwise, the root certificate information isn't present on the client computer and the client won't be able to
authenticate properly.
For install steps, see Install a client certificate.
NOTE
You must have Administrator rights on the Windows client computer from which you are connecting.
1. To connect to your VNet, on the client computer, navigate to VPN connections and locate the VPN
connection that you created. It has the same name as your virtual network. Click Connect. A pop-up
message may appear that refers to using the certificate. Click Continue to use elevated privileges.
2. On the Connection status page, click Connect to start the connection. If you see a Select Certificate
screen, verify that the client certificate showing is the one that you want to use to connect. If it is not, use the
drop-down arrow to select the correct certificate, and then click OK.
3. Your connection is established.
# For each network interface, look up its owning VM and print the VM name
# with the NIC's private IP address and allocation method (Static/Dynamic).
foreach($Nic in $Nics)
{
    $VM = $VMs | Where-Object -Property Id -eq $Nic.VirtualMachine.Id
    $Prv = $Nic.IpConfigurations | Select-Object -ExpandProperty PrivateIpAddress
    $Alloc = $Nic.IpConfigurations | Select-Object -ExpandProperty PrivateIpAllocationMethod
    Write-Output "$($VM.Name): $Prv,$Alloc"
}
2. Verify that you are connected to your VNet using the Point-to-Site VPN connection.
3. Open Remote Desktop Connection by typing "RDP" or "Remote Desktop Connection" in the search box on
the taskbar, then select Remote Desktop Connection. You can also open Remote Desktop Connection using the
'mstsc' command in PowerShell.
4. In Remote Desktop Connection, enter the private IP address of the VM. You can click "Show Options" to adjust
additional settings, then connect.
To troubleshoot an RDP connection to a VM
If you are having trouble connecting to a virtual machine over your VPN connection, check the following:
Verify that your VPN connection is successful.
Verify that you are connecting to the private IP address for the VM.
Use 'ipconfig' to check the IPv4 address assigned to the Ethernet adapter on the computer from which you are
connecting. If the IP address is within the address range of the VNet that you are connecting to, or within the
address range of your VPNClientAddressPool, this is referred to as an overlapping address space. When your
address space overlaps in this way, the network traffic doesn't reach Azure; it stays on the local network.
If you can connect to the VM using the private IP address, but not the computer name, verify that you have
configured DNS properly. For more information about how name resolution works for VMs, see Name
Resolution for VMs.
Verify that the VPN client configuration package was generated after the DNS server IP addresses were
specified for the VNet. If you updated the DNS server IP addresses, generate and install a new VPN client
configuration package.
For more information about RDP connections, see Troubleshoot Remote Desktop connections to a VM.
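The overlapping address space check described above can be sketched with Python's ipaddress module. The ranges and the helper below are hypothetical, standing in for your VNet address space and VPNClientAddressPool:

```python
import ipaddress

def overlaps(local_ip: str, *cidr_ranges: str) -> bool:
    """Return True if the client's local IP falls inside any of the given ranges."""
    addr = ipaddress.ip_address(local_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in cidr_ranges)

# Hypothetical VNet address space and VPN client address pool
vnet = "192.168.0.0/16"
client_pool = "172.16.201.0/24"

print(overlaps("192.168.1.25", vnet, client_pool))  # overlapping address space
print(overlaps("10.0.0.25", vnet, client_pool))     # no overlap
```

If the check returns True for the address 'ipconfig' reports, traffic destined for the VNet stays on the local network instead of traversing the tunnel.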
Point-to-Site FAQ
What client operating systems can I use with Point-to-Site?
The following client operating systems are supported:
Windows 7 (32-bit and 64-bit)
Windows Server 2008 R2 (64-bit only)
Windows 8.1 (32-bit and 64-bit)
Windows Server 2012 (64-bit only)
Windows Server 2012 R2 (64-bit only)
Windows Server 2016 (64-bit only)
Windows 10
Mac OS X version 10.11 (El Capitan)
Mac OS X version 10.12 (Sierra)
Linux (StrongSwan)
iOS
NOTE
Starting July 1, 2018, support is being removed for TLS 1.0 and 1.1 from Azure VPN Gateway. VPN Gateway will support only
TLS 1.2. To maintain TLS support and connectivity for your Windows 7 and Windows 8 point-to-site clients that use TLS, we
recommend that you install the following updates:
• Update for Microsoft EAP implementation that enables the use of TLS
• Update to enable TLS 1.1 and TLS 1.2 as default secure protocols in WinHTTP
The following legacy algorithms will also be deprecated for TLS on July 1, 2018:
RC4 (Rivest Cipher 4)
DES (Data Encryption Standard)
3DES (Triple Data Encryption Algorithm)
MD5 (Message Digest 5)
SHA-1 (Secure Hash Algorithm 1)
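The Windows updates above handle the VPN clients themselves, but the general idea of enforcing a TLS 1.2 floor can be sketched with Python's ssl module. This is illustrative only; it does not configure the Windows VPN client:

```python
import ssl

# Build a client-side TLS context that refuses TLS 1.0 and 1.1,
# matching the TLS 1.2-only requirement described above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version.name)
```

Any peer that only offers TLS 1.0 or 1.1 fails the handshake against such a context, which is the behavior VPN Gateway adopted for point-to-site clients.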
How many VPN client endpoints can I have in my Point-to-Site configuration?
Up to 128 VPN clients can connect to a virtual network at the same time.
Can I traverse proxies and firewalls using Point-to-Site capability?
Azure supports two types of Point-to-Site VPN options:
Secure Socket Tunneling Protocol (SSTP). SSTP is a Microsoft proprietary SSL-based solution that can
penetrate firewalls, since most firewalls open TCP port 443, which SSL uses.
IKEv2 VPN. IKEv2 VPN is a standards-based IPsec VPN solution that uses UDP ports 500 and 4500 and IP
protocol number 50. Firewalls do not always open these ports, so there is a possibility of IKEv2 VPN not being
able to traverse proxies and firewalls.
If I restart a client computer configured for Point-to-Site, will the VPN automatically reconnect?
By default, the client computer will not reestablish the VPN connection automatically.
Does Point-to-Site support auto-reconnect and DDNS on the VPN clients?
Auto-reconnect and DDNS are currently not supported in Point-to-Site VPNs.
Can I have Site-to-Site and Point-to-Site configurations coexist for the same virtual network?
Yes. For the Resource Manager deployment model, you must have a RouteBased VPN type for your gateway. For
the classic deployment model, you need a dynamic gateway. We do not support Point-to-Site for static routing
VPN gateways or PolicyBased VPN gateways.
Can I configure a Point-to-Site client to connect to multiple virtual networks at the same time?
No. A Point-to-Site client can only connect to resources in the VNet in which the virtual network gateway resides.
How much throughput can I expect through Site-to-Site or Point-to-Site connections?
It's difficult to maintain the exact throughput of the VPN tunnels. IPsec and SSTP are crypto-heavy VPN protocols.
Throughput is also limited by the latency and bandwidth between your premises and the Internet. For a VPN
Gateway with only IKEv2 Point-to-Site VPN connections, the total throughput that you can expect depends on the
Gateway SKU. For more information on throughput, see Gateway SKUs.
Can I use any software VPN client for Point-to-Site that supports SSTP and/or IKEv2?
No. You can only use the native VPN client on Windows for SSTP, and the native VPN client on Mac for IKEv2.
Refer to the list of supported client operating systems.
Does Azure support IKEv2 VPN with Windows?
IKEv2 is supported on Windows 10 and Server 2016. However, in order to use IKEv2, you must install updates and
set a registry key value locally. OS versions prior to Windows 10 are not supported and can only use SSTP.
To prepare Windows 10 or Server 2016 for IKEv2:
1. Install the update.
Next steps
Once your connection is complete, you can add virtual machines to your virtual networks. For more information,
see Virtual Machines. To understand more about networking and virtual machines, see Azure and Linux VM
network overview.
For P2S troubleshooting information, see Troubleshooting Azure point-to-site connections.
Create and modify an ExpressRoute circuit
2/16/2018 • 5 minutes to read • Edit Online
This article describes how to create an Azure ExpressRoute circuit by using the Azure portal and the Azure
Resource Manager deployment model. The following steps also show you how to check the status of the circuit,
update it, or delete and deprovision it.
IMPORTANT
Your ExpressRoute circuit is billed from the moment a service key is issued. Ensure that you perform this operation when the
connectivity provider is ready to provision the circuit.
1. You can create an ExpressRoute circuit by selecting the option to create a new resource. Click Create a
resource > Networking > ExpressRoute, as shown in the following image:
2. After you click ExpressRoute, you'll see the Create ExpressRoute circuit page. When you're filling in the
values on this page, make sure that you specify the correct SKU tier (Standard or Premium) and data
metering billing model (Unlimited or Metered).
Tier determines whether an ExpressRoute standard or an ExpressRoute premium add-on is enabled. You
can specify Standard to get the standard SKU or Premium for the premium add-on.
Data metering determines the billing type. You can specify Metered for a metered data plan and
Unlimited for an unlimited data plan. Note that you can change the billing type from Metered to
Unlimited, but you can't change the type from Unlimited to Metered.
Peering Location is the physical location where you are peering with Microsoft.
IMPORTANT
The Peering Location indicates the physical location where you are peering with Microsoft. This is not linked
to the "Location" property, which refers to the geography where the Azure Network Resource Provider is located.
While they are not related, it is a good practice to choose a Network Resource Provider geographically close
to the Peering Location of the circuit.
The circuit changes to the following state when the connectivity provider is in the process of enabling it for you:
Provider status: Provisioning
Circuit status: Enabled
For you to be able to use an ExpressRoute circuit, it must be in the following state:
Provider status: Provisioned
Circuit status: Enabled
5. Periodically check the status and the state of the circuit key
You can view the properties of the circuit that you're interested in by selecting it. Check the Provider status and
ensure that it has moved to Provisioned before you continue.
IMPORTANT
These instructions only apply to circuits that are created with service providers that offer layer 2 connectivity services. If
you're using a service provider that offers managed layer 3 services (typically an IP VPN, like MPLS), your connectivity
provider configures and manages routing for you.
IMPORTANT
You may have to recreate the ExpressRoute circuit if there is inadequate capacity on the existing port. You cannot upgrade
the circuit if there is no additional capacity available at that location.
Although you can seamlessly upgrade the bandwidth, you cannot reduce the bandwidth of an ExpressRoute circuit without
disruption. Downgrading bandwidth requires you to deprovision the ExpressRoute circuit and then reprovision a new
ExpressRoute circuit.
Disabling the Premium add-on operation can fail if you're using resources that are greater than what is permitted for the
standard circuit.
Next steps
After you create your circuit, continue with the following next steps:
Create and modify routing for your ExpressRoute circuit
Link your virtual network to your ExpressRoute circuit
Network monitoring solutions
6/7/2018 • 3 minutes to read • Edit Online
Azure offers a host of solutions to monitor your networking assets. Azure has solutions and utilities to monitor
network connectivity, the health of ExpressRoute circuits, and analyze network traffic in the cloud.
Performance Monitor
Performance Monitor is part of NPM and is network monitoring for cloud, hybrid, and on-premises environments.
You can monitor network connectivity across remote branch and field offices, store locations, data centers, and
clouds. You can detect network issues before your users complain. The key advantages are:
Monitor loss and latency across various subnets and set alerts
Monitor all paths (including redundant paths) on the network
Troubleshoot transient and point-in-time network issues that are difficult to replicate
Determine the specific segment on the network that is responsible for degraded performance
Monitor the health of the network, without the need for SNMP
Traffic Analytics
Traffic Analytics is a cloud-based solution that provides visibility into user and application activity on your cloud
networks. NSG Flow logs are analyzed to provide insights into:
Traffic flows across your networks between Azure and the Internet, public cloud regions, VNETs, and subnets
Applications and protocols on your network, without the need for sniffers or dedicated flow collector appliances
Top talkers, chatty applications, VM conversations in the cloud, traffic hotspots
Sources and destinations of traffic across VNETs, inter-relationships between critical business services and
applications
Security – malicious traffic, ports open to the Internet, applications or VMs attempting Internet access…
Capacity utilization - helps you eliminate issues of over-provisioning or underutilization by monitoring
utilization trends of VPN gateways and other services
Traffic Analytics equips you with actionable information that helps you audit your organization’s network activity,
secure applications and data, optimize workload performance and stay compliant.
Related links:
Blog post, Documentation, FAQ
DNS Analytics
Built for DNS Administrators, this solution collects, analyzes, and correlates DNS logs to provide security,
operations, and performance-related insights. Some of the capabilities are:
Identification of clients that try to resolve to malicious domains
Identification of stale resource records
Visibility into frequently queried domain names and talkative DNS clients
Visibility into the request load on DNS servers
Monitoring of dynamic DNS registration failures
Related links:
Blog post, Documentation
Check resource usage against limits
6/6/2018 • 3 minutes to read • Edit Online
In this article, you learn how to see the number of each network resource type that you've deployed in your
subscription and what your subscription limits are. The ability to view resource usage against limits is helpful to
track current usage, and plan for future use. You can use the Azure Portal, PowerShell, or the Azure CLI to track
usage.
Azure Portal
1. Log into the Azure portal.
2. In the upper left corner of the Azure portal, select All services.
3. Enter Subscriptions in the Filter box. When Subscriptions appears in the search results, select it.
4. Select the name of the subscription you want to view usage information for.
5. Under SETTINGS, select Usage + quota.
6. You can select the following options:
Resource types: You can select all resource types, or select the specific types of resources you want to
view.
Providers: You can select all resource providers, or select Compute, Network, or Storage.
Locations: You can select all Azure locations, or select specific locations.
You can choose to show all resources, or only the resource types where at least one resource is deployed.
The example in the following picture shows all of the network resources with at least one resource
deployed in the East US:
You can sort the columns by selecting the column heading. The limits shown are the limits for your
subscription. If you need to increase a default limit, select Request Increase, then complete and
submit the support request. All resources have a maximum limit listed in Azure limits. If your current
limit is already at the maximum number, the limit can't be increased.
PowerShell
You can run the commands that follow in the Azure Cloud Shell, or by running PowerShell from your computer.
The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with
your account. If you run PowerShell from your computer, you need the AzureRM PowerShell module, version 6.0.1
or later. Run Get-Module -ListAvailable AzureRM on your computer to find the installed version. If you need to
upgrade, see Install Azure PowerShell module. If you're running PowerShell locally, you also need to run
Login-AzureRmAccount to log in to Azure.
View your usage against limits with Get-AzureRmNetworkUsage. The following example gets the usage for
resources where at least one resource is deployed in the East US location:
Get-AzureRmNetworkUsage `
-Location eastus `
| Where-Object {$_.CurrentValue -gt 0} `
| Format-Table ResourceType, CurrentValue, Limit
You receive output formatted the same as the following example output:
Azure CLI
If using Azure Command-line interface (CLI) commands to complete tasks in this article, either run the commands
in the Azure Cloud Shell, or run the CLI from your computer. This article requires the Azure CLI version
2.0.32 or later. Run az --version to find the installed version. If you need to install or upgrade, see Install Azure
CLI 2.0. If you're running the Azure CLI locally, you also need to run az login to log in to Azure.
View your usage against limits with az network list-usages. The following example gets the usage for resources in
the East US location:
az network list-usages \
--location eastus \
--out table
You receive output formatted the same as the following example output:
The following table includes links to bash scripts built using the Azure CLI.
Create a virtual network for multi-tier applications: Creates a virtual network with front-end and back-end
subnets. Traffic to the front-end subnet is limited to HTTP and SSH, while traffic to the back-end subnet is
limited to MySQL, port 3306.
Peer two virtual networks: Creates and connects two virtual networks in the same region.
Route traffic through a network virtual appliance: Creates a virtual network with front-end and back-end
subnets and a VM that is able to route traffic between the two subnets.
Filter inbound and outbound VM network traffic: Creates a virtual network with front-end and back-end
subnets. Inbound network traffic to the front-end subnet is limited to HTTP, HTTPS, and SSH. Outbound traffic
to the Internet from the back-end subnet is not permitted.
Load balance traffic to VMs for high availability: Creates several virtual machines in a highly available and
load balanced configuration.
Load balance multiple websites on VMs: Creates two VMs with multiple IP configurations, joined to an Azure
Availability Set, accessible through an Azure Load Balancer.
Direct traffic across multiple regions for high application availability: Creates two app service plans, two
web apps, a traffic manager profile, and two traffic manager endpoints.
Azure PowerShell Samples for networking
6/27/2017 • 2 minutes to read • Edit Online
The following table includes links to scripts built using Azure PowerShell.
Create a virtual network for multi-tier applications: Creates a virtual network with front-end and back-end
subnets. Traffic to the front-end subnet is limited to HTTP, while traffic to the back-end subnet is limited
to SQL, port 1433.
Peer two virtual networks: Creates and connects two virtual networks in the same region.
Route traffic through a network virtual appliance: Creates a virtual network with front-end and back-end
subnets and a VM that is able to route traffic between the two subnets.
Filter inbound and outbound VM network traffic: Creates a virtual network with front-end and back-end
subnets. Inbound network traffic to the front-end subnet is limited to HTTP and HTTPS. Outbound traffic to
the Internet from the back-end subnet is not permitted.
Load balance traffic to VMs for high availability: Creates several virtual machines in a highly available and
load balanced configuration.
Load balance multiple websites on VMs: Creates two VMs with multiple IP configurations, joined to an Azure
Availability Set, accessible through an Azure Load Balancer.
Direct traffic across multiple regions for high application availability: Creates two app service plans, two
web apps, a traffic manager profile, and two traffic manager endpoints.
Configure a Point-to-Site connection to a VNet using
native Azure certificate authentication: Azure portal
3/21/2018 • 27 minutes to read • Edit Online
This article helps you securely connect individual clients running Windows or Mac OS X to an Azure VNet. Point-
to-Site VPN connections are useful when you want to connect to your VNet from a remote location, such as when
you are telecommuting from home or a conference. You can also use P2S instead of a Site-to-Site VPN when you
have only a few clients that need to connect to a VNet. Point-to-Site connections do not require a VPN device or a
public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol),
or IKEv2. For more information about Point-to-Site VPN, see About Point-to-Site VPN.
Architecture
Point-to-Site native Azure certificate authentication connections use the following items, which you configure in
this exercise:
A RouteBased VPN gateway.
The public key (.cer file) for a root certificate, which is uploaded to Azure. Once the certificate is uploaded, it is
considered a trusted certificate and is used for authentication.
A client certificate that is generated from the root certificate. The client certificate is installed on each
client computer that will connect to the VNet. This certificate is used for client authentication.
A VPN client configuration. The VPN client configuration files contain the necessary information for the client
to connect to the VNet. The files configure the existing VPN client that is native to the operating system. Each
client that connects must be configured using the settings in the configuration files.
Example values
You can use the following values to create a test environment, or refer to these values to better understand the
examples in this article:
VNet Name: VNet1
Address space: 192.168.0.0/16
For this example, we use only one address space. You can have more than one address space for your VNet.
Subnet name: FrontEnd
Subnet address range: 192.168.1.0/24
Subscription: If you have more than one subscription, verify that you are using the correct one.
Resource Group: TestRG
Location: East US
GatewaySubnet: 192.168.200.0/24
DNS Server: (optional) IP address of the DNS server that you want to use for name resolution.
Virtual network gateway name: VNet1GW
Gateway type: VPN
VPN type: Route-based
Public IP address name: VNet1GWpip
Connection type: Point-to-site
Client address pool: 172.16.201.0/24
VPN clients that connect to the VNet using this Point-to-Site connection receive an IP address from the client
address pool.
NOTE
If you want this VNet to connect to an on-premises location (in addition to creating a P2S configuration), you need to
coordinate with your on-premises network administrator to carve out an IP address range that you can use specifically for
this virtual network. If a duplicate address range exists on both sides of the VPN connection, traffic does not route the way
you may expect it to. Additionally, if you want to connect this VNet to another VNet, the address space cannot
overlap with that of the other VNet. Take care to plan your network configuration accordingly.
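Using the example values from this article, the no-overlap requirement can be checked with Python's ipaddress module. The on-premises range below is a hypothetical placeholder; substitute the range your network administrator carves out:

```python
import ipaddress

# Example values from this article, plus a hypothetical on-premises range
vnet_space = ipaddress.ip_network("192.168.0.0/16")
client_pool = ipaddress.ip_network("172.16.201.0/24")
on_premises = ipaddress.ip_network("10.1.0.0/16")

# None of the ranges may overlap, or traffic will not route as expected.
ranges = {"VNet": vnet_space, "client pool": client_pool, "on-premises": on_premises}
names = list(ranges)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        status = "OVERLAP" if ranges[a].overlaps(ranges[b]) else "ok"
        print(f"{a} vs {b}: {status}")
```

Running a check like this before creating the VNet is cheaper than diagnosing the routing problems an overlap causes later.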
1. From a browser, navigate to the Azure portal and, if necessary, sign in with your Azure account.
2. Click +. In the Search the marketplace field, type "Virtual Network". Locate Virtual Network from the
returned list and click to open the Virtual Network page.
3. Near the bottom of the Virtual Network page, from the Select a deployment model list, select Resource
Manager, and then click Create.
4. On the Create virtual network page, configure the VNet settings. When you fill in the fields, the red
exclamation mark becomes a green check mark when the characters entered in the field are valid. There
may be values that are auto-filled. If so, replace the values with your own. The Create virtual network
page looks similar to the following example:
12. After clicking Create, you will see a tile on your dashboard that will reflect the progress of your VNet. The
tile changes as the VNet is being created.
2. Add a gateway subnet
Before connecting your virtual network to a gateway, you first need to create the gateway subnet for the virtual
network to which you want to connect. The gateway services use the IP addresses specified in the gateway subnet.
If possible, create a gateway subnet using a CIDR block of /28 or /27 to provide enough IP addresses to
accommodate additional future configuration requirements.
1. In the portal, navigate to the Resource Manager virtual network for which you want to create a virtual network
gateway.
2. In the Settings section of your VNet page, click Subnets to expand the Subnets page.
3. On the Subnets page, click +Gateway subnet to open the Add subnet page.
4. The Name for your subnet is automatically filled in with the value 'GatewaySubnet'. This value is required
in order for Azure to recognize the subnet as the gateway subnet. Adjust the auto-filled Address range
values to match your configuration requirements, then click OK at the bottom of the page to create the
subnet.
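The portal steps above can also be sketched in Azure PowerShell (AzureRM cmdlets; names and the address range below are placeholders). Note that the subnet name must be exactly 'GatewaySubnet' for Azure to recognize it:

```powershell
# Retrieve the virtual network, add the gateway subnet, and commit the change
$vnet = Get-AzureRmVirtualNetwork -Name VNet1 -ResourceGroupName TestRG

Add-AzureRmVirtualNetworkSubnetConfig -Name GatewaySubnet `
    -VirtualNetwork $vnet -AddressPrefix 192.168.200.0/27

Set-AzureRmVirtualNetwork -VirtualNetwork $vnet   # writes the updated config to Azure
```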
3. On the Create virtual network gateway page, specify the values for your virtual network gateway.
Name: Name your gateway. This is not the same as naming a gateway subnet. It's the name of the
gateway object you are creating.
Gateway type: Select VPN. VPN gateways use the virtual network gateway type VPN.
VPN type: Select the VPN type that is specified for your configuration. Most configurations require a
Route-based VPN type.
SKU: Select the gateway SKU from the dropdown. The SKUs listed in the dropdown depend on the
VPN type you select. For more information about gateway SKUs, see Gateway SKUs.
Location: You may need to scroll to see Location. Adjust the Location field to point to the location
where your virtual network is located. If the location is not pointing to the region where your virtual
network resides, when you select a virtual network in the next step, it will not appear in the drop-down
list.
Virtual network: Choose the virtual network to which you want to add this gateway. Click Virtual
network to open the 'Choose a virtual network' page. Select the VNet. If you don't see your VNet, make
sure the Location field is pointing to the region in which your virtual network is located.
Gateway subnet address range: You will only see this setting if you did not previously create a
gateway subnet for your virtual network. If you previously created a valid gateway subnet, this setting
will not appear.
First IP configuration: The 'Choose public IP address' page creates a public IP address object that
gets associated to the VPN gateway. The public IP address is dynamically assigned to this object
when the VPN gateway is created. VPN Gateway currently only supports Dynamic Public IP address
allocation. However, this does not mean that the IP address changes after it has been assigned to
your VPN gateway. The only time the Public IP address changes is when the gateway is deleted and
re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of
your VPN gateway.
First, click Create gateway IP configuration to open the 'Choose public IP address' page, then
click +Create new to open the 'Create public IP address' page.
Next, input a Name for your public IP address. Leave the SKU as Basic unless there is a
specific reason to change it to something else, then click OK at the bottom of this page to save
your changes.
4. Verify the settings. You can select Pin to dashboard at the bottom of the page if you want your gateway to
appear on the dashboard.
5. Click Create to begin creating the VPN gateway. The settings are validated and you'll see the "Deploying
Virtual network gateway" tile on the dashboard. Creating a gateway can take up to 45 minutes. You may need
to refresh your portal page to see the completed status.
After the gateway is created, view the IP address that has been assigned to it by looking at the virtual network in
the portal. The gateway appears as a connected device. You can click the connected device (your virtual network
gateway) to view more information.
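The portal workflow above maps to the following Azure PowerShell sketch (AzureRM cmdlets; all resource names and the region are placeholder values). As noted, the public IP address must use dynamic allocation:

```powershell
# Look up the VNet and its gateway subnet (placeholder names)
$vnet   = Get-AzureRmVirtualNetwork -Name VNet1 -ResourceGroupName TestRG
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet

# VPN Gateway currently supports only dynamic public IP allocation
$pip    = New-AzureRmPublicIpAddress -Name VNet1GWIP -ResourceGroupName TestRG `
              -Location EastUS -AllocationMethod Dynamic

$ipconf = New-AzureRmVirtualNetworkGatewayIpConfig -Name gwipconf `
              -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

# Creating the gateway can take up to 45 minutes
New-AzureRmVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG `
    -Location EastUS -IpConfigurations $ipconf `
    -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1
```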
NOTE
The Basic SKU does not support IKEv2 or RADIUS authentication.
5. Generate certificates
Certificates are used by Azure to authenticate clients connecting to a VNet over a Point-to-Site VPN connection.
Once you obtain a root certificate, you upload the public key information to Azure. The root certificate is then
considered 'trusted' by Azure for connection over P2S to the virtual network. You also generate client certificates
from the trusted root certificate, and then install them on each client computer. The client certificate is used to
authenticate the client when it initiates a connection to the VNet.
1. Obtain the .cer file for the root certificate
You can use either a root certificate that was generated using an enterprise solution (recommended), or you can
generate a self-signed certificate. After creating the root certificate, export the public certificate data (not the
private key) as a Base-64 encoded X.509 .cer file and upload the public certificate data to Azure.
Enterprise certificate: If you are using an enterprise solution, you can use your existing certificate chain.
Obtain the .cer file for the root certificate that you want to use.
Self-signed root certificate: If you aren't using an enterprise certificate solution, you need to create a self-
signed root certificate. It's important that you follow the steps in one of the P2S certificate articles below.
Otherwise, the certificates you create won't be compatible with P2S connections and clients receive a
connection error when trying to connect. You can use Azure PowerShell, MakeCert, or OpenSSL. The steps
in the provided articles generate a compatible certificate:
Windows 10 PowerShell instructions: These instructions require Windows 10 and PowerShell to
generate certificates. Client certificates that are generated from the root certificate can be installed on
any supported P2S client.
MakeCert instructions: Use MakeCert if you don't have access to a Windows 10 computer to
generate certificates. MakeCert is deprecated, but you can still use it to generate certificates.
Client certificates that are generated from the root certificate can be installed on any supported P2S
client.
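The Windows 10 PowerShell approach referenced above can be sketched as follows; 'P2SRootCert' is a placeholder name. The parameters match the pattern used for P2S-compatible self-signed roots:

```powershell
# Create a self-signed root certificate in the current user's store (Windows 10)
$rootCert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
    -Subject 'CN=P2SRootCert' -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation 'Cert:\CurrentUser\My' `
    -KeyUsageProperty Sign -KeyUsage CertSign

# Base-64 public certificate data (no private key) to upload to Azure
[Convert]::ToBase64String($rootCert.RawData)
```

You can also export the public certificate data from certmgr.msc as a Base-64 encoded X.509 .cer file, as described above.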
2. Generate a client certificate
Each client computer that connects to a VNet using Point-to-Site must have a client certificate installed. The client
certificate is generated from the root certificate and installed on each client computer. If a valid client certificate is
not installed and the client tries to connect to the VNet, authentication fails.
You can either generate a unique certificate for each client, or you can use the same certificate for multiple clients.
The advantage to generating unique client certificates is the ability to revoke a single certificate. Otherwise, if
multiple clients are using the same client certificate and you need to revoke it, you have to generate and install
new certificates for all the clients that use that certificate to authenticate.
You can generate client certificates using the following methods:
Enterprise certificate:
If you are using an enterprise certificate solution, generate a client certificate with the common name
value format 'name@yourdomain.com', rather than the 'domain name\username' format.
Make sure the client certificate is based on the 'User' certificate template that has 'Client Authentication'
as the first item in the use list, rather than Smart Card Logon, etc. You can check the certificate by
double-clicking the client certificate and viewing Details > Enhanced Key Usage.
Self-signed root certificate: It's important that you follow the steps in one of the P2S certificate articles
below. Otherwise, the client certificates you create won't be compatible with P2S connections and clients
receive an error when trying to connect. The steps in either of the following articles generate a compatible
client certificate:
Windows 10 PowerShell instructions: These instructions require Windows 10 and PowerShell to
generate certificates. The certificates that are generated can be installed on any supported P2S client.
MakeCert instructions: Use MakeCert if you don't have access to a Windows 10 computer to
generate certificates. MakeCert is deprecated, but you can still use it to generate certificates. The
certificates that are generated can be installed on any supported P2S client.
When you generate a client certificate from a self-signed root certificate using the preceding instructions,
it's automatically installed on the computer that you used to generate it. If you want to install a client
certificate on another client computer, you need to export it as a .pfx, along with the entire certificate chain.
This creates a .pfx file that contains the root certificate information that is required for the client to
successfully authenticate. For steps to export a certificate, see Certificates - export a client certificate.
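Generating a client certificate from a self-signed root with Windows 10 PowerShell can be sketched as shown below. It assumes the root certificate object is available in the session as $rootCert (for example, from the root-generation step); 'P2SChildCert' is a placeholder name:

```powershell
# Create a client certificate signed by the root; the TextExtension sets the
# Enhanced Key Usage to Client Authentication (1.3.6.1.5.5.7.3.2), as P2S requires
New-SelfSignedCertificate -Type Custom -DnsName P2SChildCert -KeySpec Signature `
    -Subject 'CN=P2SChildCert' -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation 'Cert:\CurrentUser\My' `
    -Signer $rootCert `
    -TextExtension @('2.5.29.37={text}1.3.6.1.5.5.7.3.2')
```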
3. On the Point-to-site configuration page, in the Address pool box, add the private IP address range that
you want to use. VPN clients dynamically receive an IP address from the range that you specify. Click Save
to validate and save the setting.
NOTE
If you don't see Tunnel type or Authentication type in the portal on this page, your gateway is using the Basic SKU.
The Basic SKU does not support IKEv2 or RADIUS authentication.
4. Paste the certificate data into the Public Certificate Data field. Name the certificate, and then click Save.
You can add up to 20 trusted root certificates.
5. Click Save at the top of the page to save all of the configuration settings.
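Steps 3 and 4 can also be scripted. The following AzureRM sketch uses placeholder names and assumes the root certificate object is available as $rootCert (otherwise, read the Base-64 data from your exported .cer file):

```powershell
$gw = Get-AzureRmVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG

# Private range that VPN clients dynamically receive addresses from
Set-AzureRmVirtualNetworkGatewayVpnClientConfig -VirtualNetworkGateway $gw `
    -VpnClientAddressPool '172.16.201.0/24'

# Upload the root certificate's public key data (up to 20 trusted roots)
$certBase64 = [Convert]::ToBase64String($rootCert.RawData)
Add-AzureRmVpnClientRootCertificate -VpnClientRootCertificateName P2SRootCert `
    -VirtualNetworkGatewayName VNet1GW -ResourceGroupName TestRG `
    -PublicCertData $certBase64
```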
10. Install an exported client certificate
If you want to create a P2S connection from a client computer other than the one you used to generate the client
certificates, you need to install a client certificate. When installing a client certificate, you need the password that
was created when the client certificate was exported.
Make sure the client certificate was exported as a .pfx along with the entire certificate chain (which is the default).
Otherwise, the root certificate information isn't present on the client computer and the client won't be able to
authenticate properly.
For install steps, see Install a client certificate.
NOTE
You must have Administrator rights on the Windows client computer from which you are connecting.
1. To connect to your VNet, on the client computer, navigate to VPN connections and locate the VPN
connection that you created. It is named the same name as your virtual network. Click Connect. A pop-up
message may appear that refers to using the certificate. Click Continue to use elevated privileges.
2. On the Connection status page, click Connect to start the connection. If you see a Select Certificate
screen, verify that the client certificate showing is the one that you want to use to connect. If it is not, use
the drop-down arrow to select the correct certificate, and then click OK.
3. Your connection is established.
1. To find the private IP address of a VM, list the network interfaces in your subscription. The following
PowerShell commands collect the VMs and NICs, then print each VM's name, private IP address, and
allocation method:
$VMs = Get-AzureRmVM
$Nics = Get-AzureRmNetworkInterface | Where-Object -Property VirtualMachine -NE $null
foreach($Nic in $Nics)
{
    $VM = $VMs | Where-Object -Property Id -EQ $Nic.VirtualMachine.Id
    $Prv = $Nic.IpConfigurations | Select-Object -ExpandProperty PrivateIpAddress
    $Alloc = $Nic.IpConfigurations | Select-Object -ExpandProperty PrivateIpAllocationMethod
    Write-Output "$($VM.Name): $Prv,$Alloc"
}
2. Verify that you are connected to your VNet using the Point-to-Site VPN connection.
3. Open Remote Desktop Connection by typing "RDP" or "Remote Desktop Connection" in the search box on
the taskbar, then select Remote Desktop Connection. You can also open Remote Desktop Connection using the
'mstsc' command in PowerShell.
4. In Remote Desktop Connection, enter the private IP address of the VM. You can click "Show Options" to adjust
additional settings, then connect.
To troubleshoot an RDP connection to a VM
If you are having trouble connecting to a virtual machine over your VPN connection, check the following:
Verify that your VPN connection is successful.
Verify that you are connecting to the private IP address for the VM.
Use 'ipconfig' to check the IPv4 address assigned to the Ethernet adapter on the computer from which you are
connecting. If the IP address is within the address range of the VNet that you are connecting to, or within the
address range of your VPNClientAddressPool, this is referred to as an overlapping address space. When your
address space overlaps in this way, the network traffic doesn't reach Azure; it stays on the local network.
If you can connect to the VM using the private IP address, but not the computer name, verify that you have
configured DNS properly. For more information about how name resolution works for VMs, see Name
Resolution for VMs.
Verify that the VPN client configuration package was generated after the DNS server IP addresses were
specified for the VNet. If you updated the DNS server IP addresses, generate and install a new VPN client
configuration package.
For more information about RDP connections, see Troubleshoot Remote Desktop connections to a VM.
Point-to-Site FAQ
What client operating systems can I use with Point-to-Site?
The following client operating systems are supported:
Windows 7 (32-bit and 64-bit)
Windows Server 2008 R2 (64-bit only)
Windows 8.1 (32-bit and 64-bit)
Windows Server 2012 (64-bit only)
Windows Server 2012 R2 (64-bit only)
Windows Server 2016 (64-bit only)
Windows 10
Mac OS X version 10.11 (El Capitan)
Mac OS X version 10.12 (Sierra)
Linux (StrongSwan)
iOS
NOTE
Starting July 1, 2018, support is being removed for TLS 1.0 and 1.1 from Azure VPN Gateway. VPN Gateway will support
only TLS 1.2. To maintain TLS support and connectivity for your Windows 7 and Windows 8 point-to-site clients that use
TLS, we recommend that you install the following updates:
• Update for Microsoft EAP implementation that enables the use of TLS
• Update to enable TLS 1.1 and TLS 1.2 as default secure protocols in WinHTTP
The following legacy algorithms will also be deprecated for TLS on July 1, 2018:
RC4 (Rivest Cipher 4)
DES (Data Encryption Standard)
3DES (Triple Data Encryption Algorithm)
MD5 (Message Digest 5)
SHA-1 (Secure Hash Algorithm 1)
How many VPN client endpoints can I have in my Point-to-Site configuration?
Up to 128 VPN clients can connect to a virtual network at the same time.
Can I traverse proxies and firewalls using Point-to-Site capability?
Azure supports two types of Point-to-Site VPN options:
Secure Socket Tunneling Protocol (SSTP). SSTP is a Microsoft proprietary SSL-based solution that can
penetrate firewalls because most firewalls open TCP port 443, which SSL uses.
IKEv2 VPN. IKEv2 VPN is a standards-based IPsec VPN solution that uses UDP ports 500 and 4500 and IP
protocol 50. Firewalls do not always open these ports, so IKEv2 VPN might not be able to traverse proxies
and firewalls.
If I restart a client computer configured for Point-to-Site, will the VPN automatically reconnect?
By default, the client computer will not reestablish the VPN connection automatically.
Does Point-to-Site support auto-reconnect and DDNS on the VPN clients?
Auto-reconnect and DDNS are currently not supported in Point-to-Site VPNs.
Can Site-to-Site and Point-to-Site configurations coexist for the same virtual network?
Yes. For the Resource Manager deployment model, you must have a RouteBased VPN type for your gateway. For
the classic deployment model, you need a dynamic gateway. We do not support Point-to-Site for static routing
VPN gateways or PolicyBased VPN gateways.
Can I configure a Point-to-Site client to connect to multiple virtual networks at the same time?
No. A Point-to-Site client can only connect to resources in the VNet in which the virtual network gateway resides.
How much throughput can I expect through Site-to-Site or Point-to-Site connections?
It's difficult to predict the exact throughput of the VPN tunnels. IPsec and SSTP are crypto-heavy VPN
protocols. Throughput is also limited by the latency and bandwidth between your premises and the Internet. For a
VPN Gateway with only IKEv2 Point-to-Site VPN connections, the total throughput that you can expect depends
on the Gateway SKU. For more information on throughput, see Gateway SKUs.
Can I use any software VPN client for Point-to-Site that supports SSTP and/or IKEv2?
No. You can only use the native VPN client on Windows for SSTP, and the native VPN client on Mac for IKEv2.
Refer to the list of supported client operating systems.
Does Azure support IKEv2 VPN with Windows?
IKEv2 is supported on Windows 10 and Server 2016. However, in order to use IKEv2, you must install updates
and set a registry key value locally. OS versions prior to Windows 10 are not supported and can only use SSTP.
To prepare Windows 10 or Server 2016 for IKEv2:
1. Install the update.
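The registry step mentioned above can be sketched in PowerShell. The specific key and value below are an assumption based on commonly documented IKEv2 guidance for Windows clients; verify them against the current official instructions before applying:

```powershell
# Assumed registry value for IKEv2 P2S on Windows 10/Server 2016 - verify first.
# Create the IKEv2 key beforehand if it doesn't already exist.
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\RasMan\IKEv2' `
    -Name DisableCertReqPayload -PropertyType DWORD -Value 1 -Force
```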
Next steps
Once your connection is complete, you can add virtual machines to your virtual networks. For more information,
see Virtual Machines. To understand more about networking and virtual machines, see Azure and Linux VM
network overview.
For P2S troubleshooting information, see Troubleshooting Azure point-to-site connections.