
Contents

Overview
About Azure networking
Architecture
Virtual Datacenters
Asymmetric routing with multiple network paths
Secure network designs
Hub-spoke topology
Network security best practices
Highly available network virtual appliances
Combine load balancing methods
Disaster recovery using Azure DNS and Traffic Manager
Plan and design
Virtual networks
Cross-premises connectivity - VPN
Cross-premises connectivity - dedicated private
Concepts
Virtual networks
Network load balancing
Application load balancing
DNS
DNS-based traffic distribution
Connect on-premises - VPN
Connect on-premises - dedicated
Get started
Create your first virtual network
How to
Internet connectivity
Network load balance public servers
Application load balance public servers
Protect web applications
Distribute traffic across locations
Internal connectivity
Network load balance private servers
Application load balance private servers
Connect virtual networks (same location)
Connect virtual networks (different locations)
Cross-premises connectivity
Create an S2S VPN connection (IPsec/IKE)
Create a P2S VPN connection (SSTP with certificates)
Create a dedicated private connection (ExpressRoute)
Management
Network monitoring overview
Check resource usage against Azure limits
View network topology
Sample scripts
Azure CLI
Azure PowerShell
Tutorials
Load balance VMs
Connect a computer to a virtual network
Reference
Azure CLI
Azure PowerShell
.NET
Node.js
REST
Resources
Author templates
Azure Roadmap
Community templates
Networking blog
Pricing
Pricing calculator
Regional availability
Stack Overflow
Videos
Azure networking
5/21/2018 • 13 minutes to read

Azure provides a variety of networking capabilities that can be used together or separately. Click any of the
following key capabilities to learn more about them:
Connectivity between Azure resources: Connect Azure resources together in a secure, private virtual network in
the cloud.
Internet connectivity: Communicate to and from Azure resources over the Internet.
On-premises connectivity: Connect an on-premises network to Azure resources through a virtual private
network (VPN) over the Internet, or through a dedicated connection to Azure.
Load balancing and traffic direction: Load balance traffic to servers in the same location and direct traffic to
servers in different locations.
Security: Filter network traffic between network subnets or individual virtual machines (VMs).
Routing: Use default routing or fully control routing between your Azure and on-premises resources.
Manageability: Monitor and manage your Azure networking resources.
Deployment and configuration tools: Use a web-based portal or cross-platform command-line tools to deploy
and configure network resources.

Connectivity between Azure resources


Azure resources such as Virtual Machines, Cloud Services, Virtual Machine Scale Sets, and Azure App Service
Environments can communicate privately with each other through an Azure Virtual Network (VNet). A VNet is a
logical isolation of the Azure cloud dedicated to your subscription. You can implement multiple VNets within each
Azure subscription and Azure region. Each VNet is isolated from other VNets. For each VNet you can:
Specify a custom private IP address space using public and private (RFC 1918) addresses. Azure assigns
resources connected to the VNet a private IP address from the address space you assign.
Segment the VNet into one or more subnets and allocate a portion of the VNet address space to each subnet.
Use Azure-provided name resolution or specify your own DNS server for use by resources connected to a
VNet.
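The address-space segmentation above can be sketched with Python's standard `ipaddress` module (the VNet range here is a hypothetical RFC 1918 example, not an Azure API call):

```python
import ipaddress

# Hypothetical VNet address space drawn from the RFC 1918 private ranges.
vnet = ipaddress.ip_network("10.1.0.0/16")

# Segment the VNet into /24 subnets and allocate the first two of them.
subnets = list(vnet.subnets(new_prefix=24))
frontend, backend = subnets[0], subnets[1]

print(frontend)      # 10.1.0.0/24
print(backend)       # 10.1.1.0/24
print(len(subnets))  # 256 possible /24 subnets in a /16
```

In a real deployment you would pass the chosen prefixes to the portal, CLI, or a template; the arithmetic is the same.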
To learn more about the Azure Virtual Network service, read the Virtual network overview article. You can connect
VNets to each other, enabling resources connected to either VNet to communicate with each other across VNets.
You can use either or both of the following options to connect VNets to each other:
Peering: Enables resources connected to different Azure VNets within the same Azure region to communicate
with each other. The bandwidth and latency across the VNets is the same as if the resources were connected to
the same VNet. To learn more about peering, read the Virtual network peering overview article.
VPN Gateway: Enables resources connected to different Azure VNets within different Azure regions to
communicate with each other. Traffic between VNets flows through an Azure VPN Gateway. Bandwidth
between VNets is limited to the bandwidth of the gateway. To learn more about connecting VNets with a VPN
Gateway, read the Configure a VNet-to-VNet connection across regions article.

Internet connectivity
All Azure resources connected to a VNet have outbound connectivity to the Internet by default. The private IP
address of the resource is source network address translated (SNAT) to a public IP address by the Azure
infrastructure. To learn more about outbound Internet connectivity, read the Understanding outbound connections
in Azure article.
To communicate inbound to Azure resources from the Internet, or to communicate outbound to the Internet
without SNAT, a resource must be assigned a public IP address. To learn more about public IP addresses, read the
Public IP addresses article.
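The SNAT behavior described above can be sketched in a simplified model (the public IP and starting port are hypothetical; Azure's actual SNAT port allocation is more involved):

```python
# Conceptual sketch of source NAT (SNAT): outbound flows from private IPs
# share one public IP, distinguished by translated source ports.
PUBLIC_IP = "203.0.113.10"  # hypothetical public IP

class SnatTable:
    def __init__(self, first_port=1024):
        self.next_port = first_port
        self.flows = {}  # (private_ip, private_port) -> translated public port

    def translate(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.flows:          # new flow: allocate the next port
            self.flows[key] = self.next_port
            self.next_port += 1
        return (PUBLIC_IP, self.flows[key])

snat = SnatTable()
print(snat.translate("10.0.0.4", 50000))  # ('203.0.113.10', 1024)
print(snat.translate("10.0.0.5", 50000))  # ('203.0.113.10', 1025)
print(snat.translate("10.0.0.4", 50000))  # same flow reuses the same mapping
```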

On-premises connectivity
You can access resources in your VNet securely over either a VPN connection, or a direct private connection. To
send network traffic between your Azure virtual network and your on-premises network, you must create a virtual
network gateway. You configure settings for the gateway to create the type of connection that you want, either
VPN or ExpressRoute.
You can connect your on-premises network to a VNet using any combination of the following options:
Point-to-site (VPN over SSTP)
The following picture shows separate point-to-site connections between multiple computers and a VNet:

This connection is established between a single computer and a VNet. This connection type is great if you're just
getting started with Azure, or for developers, because it requires little or no changes to your existing network. It's
also convenient when you are connecting from a remote location such as a conference or home. Point-to-site
connections are often coupled with a site-to-site connection through the same virtual network gateway. The
connection uses the SSTP protocol to provide encrypted communication over the Internet between the computer
and the VNet. The latency for a point-to-site VPN is unpredictable, since the traffic traverses the Internet.
Site-to-site (IPsec/IKE VPN tunnel)

This connection is established between your on-premises VPN device and an Azure VPN Gateway. This
connection type enables any on-premises resource that you authorize to access the VNet. The connection is an
IPsec/IKE VPN that provides encrypted communication over the Internet between your on-premises device and
the Azure VPN gateway. You can connect multiple on-premises sites to the same VPN gateway. The on-premises
VPN device at each site must have an externally-facing public IP address that is not behind a NAT. The latency for a
site-to-site connection is unpredictable, since the traffic traverses the Internet.
ExpressRoute (dedicated private connection)

This type of connection is established between your network and Azure, through an ExpressRoute partner. This
connection is private. Traffic does not traverse the Internet. The latency for an ExpressRoute connection is
predictable, since traffic doesn't traverse the Internet. ExpressRoute can be combined with a site-to-site connection.
To learn more about all the previous connection options, read the Connection topology diagrams article.

Load balancing and traffic direction


Microsoft Azure provides multiple services for managing how network traffic is distributed and load balanced. You
can use any of the following capabilities separately or together:
DNS load balancing
The Azure Traffic Manager service provides global DNS load balancing. Traffic Manager responds to clients with
the IP address of a healthy endpoint, based on one of the following routing methods:
Geographic: Clients are directed to specific endpoints (Azure, external or nested) based on which geographic
location their DNS query originates from. This method enables scenarios where knowing a client's geographic
region, and routing them based on it, is important. Examples include complying with data sovereignty
mandates, localization of content & user experience, and measuring traffic from different regions.
Performance: The IP address returned to the client is the "closest" to the client. The "closest" endpoint is not
necessarily closest as measured by geographic distance. Instead, this method determines the closest endpoint
by measuring network latency. Traffic Manager maintains an Internet latency table to track the round-trip time
between IP address ranges and each Azure datacenter.
Priority: Traffic is directed to the primary (highest-priority) endpoint. If the primary endpoint is not available,
Traffic Manager routes the traffic to the second endpoint. If both the primary and secondary endpoints are not
available, the traffic goes to the third, and so on. Availability of the endpoint is based on the configured status
(enabled or disabled) and the ongoing endpoint monitoring.
Weighted round-robin: For each request, Traffic Manager randomly chooses an available endpoint. The
probability of choosing an endpoint is based on the weights assigned to all available endpoints. Using the same
weight across all endpoints results in an even traffic distribution. Using higher or lower weights on specific
endpoints causes those endpoints to be returned more or less frequently in the DNS responses.
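As a rough sketch of the weighted method (a conceptual model, not Traffic Manager's implementation; the endpoint names and weights are made up):

```python
import random

# Hypothetical endpoints with assigned weights: "east-us" should receive
# roughly three times as much traffic as "west-europe".
endpoints = {"east-us": 3, "west-europe": 1}

def pick_endpoint(endpoints):
    """Choose an endpoint with probability proportional to its weight."""
    names = list(endpoints)
    weights = [endpoints[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

random.seed(0)  # deterministic for the demonstration
counts = {name: 0 for name in endpoints}
for _ in range(10_000):
    counts[pick_endpoint(endpoints)] += 1
print(counts)  # east-us receives roughly 3x the responses of west-europe
```

Setting equal weights would make the distribution even, matching the description above.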
The following picture shows a request for a web application directed to a Web App endpoint. Endpoints can also
be other Azure services such as VMs and Cloud Services.

The client connects directly to that endpoint. Azure Traffic Manager detects when an endpoint is unhealthy and
then redirects clients to a different, healthy endpoint. To learn more about Traffic Manager, read the Azure Traffic
Manager overview article.
Application load balancing
The Azure Application Gateway service provides an application delivery controller (ADC) as a service. Application
Gateway offers various Layer 7 (HTTP/HTTPS) load-balancing capabilities for your applications, including a web
application firewall to protect your web applications from vulnerabilities and exploits. Application Gateway also
allows you to optimize web farm productivity by offloading CPU-intensive SSL termination to the application
gateway.
Other Layer 7 routing capabilities include round-robin distribution of incoming traffic, cookie-based session
affinity, URL path-based routing, and the ability to host multiple websites behind a single application gateway.
Application Gateway can be configured as an Internet-facing gateway, an internal-only gateway, or a combination
of both. Application Gateway is fully Azure managed, scalable, and highly available. It provides a rich set of
diagnostics and logging capabilities for better manageability. To learn more about Application Gateway, read the
Application Gateway overview article.
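The URL path-based routing capability can be sketched conceptually (the path prefixes and pool names are hypothetical, and this is not the Application Gateway API):

```python
# Hypothetical path-based rules: requests are dispatched to back-end pools
# by the longest matching URL path prefix.
path_rules = {
    "/images/": "image-pool",
    "/video/": "video-pool",
}

def route(path, rules, default="default-pool"):
    """Return the back-end pool for a request path; longest prefix wins."""
    for prefix, pool in sorted(rules.items(), key=lambda kv: -len(kv[0])):
        if path.startswith(prefix):
            return pool
    return default

print(route("/images/logo.png", path_rules))  # image-pool
print(route("/video/intro.mp4", path_rules))  # video-pool
print(route("/checkout", path_rules))         # default-pool
```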
The following picture shows URL path-based routing with Application Gateway:
Network load balancing
The Azure Load Balancer provides high-performance, low-latency Layer 4 load-balancing for all UDP and TCP
protocols. It manages inbound and outbound connections. You can configure public and internal load-balanced
endpoints. You can define rules to map inbound connections to back-end pool destinations by using TCP and HTTP
health-probing options to manage service availability. To learn more about Load Balancer, read the Load Balancer
overview article.
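A Layer 4 load balancer's flow distribution can be sketched as a hash over the connection 5-tuple (a simplified conceptual model; the backend names are invented, and Azure's actual hashing is internal to the platform):

```python
import hashlib

# Hypothetical back-end pool.
backends = ["vm-0", "vm-1", "vm-2"]

def pick_backend(backends, src_ip, src_port, dst_ip, dst_port, protocol):
    """Hash the flow 5-tuple so every packet of a flow hits the same backend."""
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{protocol}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]

flow = ("203.0.113.7", 49152, "10.0.0.4", 80, "tcp")
first = pick_backend(backends, *flow)
second = pick_backend(backends, *flow)
print(first == second)  # True: one flow always maps to the same backend
```

A health probe would simply remove an unhealthy VM from `backends` before the hash is taken.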
The following picture shows an Internet-facing multi-tier application that utilizes both external and internal load
balancers:
Security
You can filter traffic to and from Azure resources using the following options:
Network: You can implement Azure network security groups (NSGs) to filter inbound and outbound traffic to
Azure resources. Each NSG contains one or more inbound and outbound rules. Each rule specifies the source
IP addresses, destination IP addresses, port, and protocol that traffic is filtered with. NSGs can be applied to
individual subnets and individual VMs. To learn more about NSGs, read the Network security groups overview
article.
Application: By using an Application Gateway with web application firewall you can protect your web
applications from vulnerabilities and exploits. Common examples are SQL injection attacks, cross site scripting,
and malformed headers. Application gateway filters out this traffic and stops it from reaching your web servers.
You can configure which rules you want enabled, and SSL negotiation policies can be configured to disable
specific policies. To learn more about the web application firewall, read the Web
application firewall article.
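The NSG filtering model described above can be sketched as priority-ordered rule evaluation (a simplified conceptual model with made-up rules; real NSG rules also match direction, port ranges, and service tags):

```python
import ipaddress

# Hypothetical NSG: rules are evaluated in priority order (lowest number
# first); the first matching rule decides the action.
rules = [
    {"priority": 100, "source": "10.0.0.0/24", "port": 443, "action": "Allow"},
    {"priority": 200, "source": "0.0.0.0/0",   "port": 443, "action": "Deny"},
]

def evaluate(rules, source_ip, port):
    src = ipaddress.ip_address(source_ip)
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if src in ipaddress.ip_network(rule["source"]) and port == rule["port"]:
            return rule["action"]
    return "Deny"  # implicit deny when nothing matches

print(evaluate(rules, "10.0.0.5", 443))     # Allow
print(evaluate(rules, "203.0.113.9", 443))  # Deny
```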
If you need network capability Azure doesn't provide, or want to use network applications you use on-premises,
you can implement the products in VMs and connect them to your VNet. The Azure Marketplace contains several
different VMs pre-configured with network applications you may currently use. These pre-configured VMs are
typically referred to as network virtual appliances (NVAs). NVAs are available with applications such as firewalls and
WAN optimization.

Routing
Azure creates default route tables that enable resources connected to any subnet in any VNet to communicate with
each other. You can implement either or both of the following types of routes to override the default routes Azure
creates:
User-defined: You can create custom route tables with routes that control where traffic is routed to for each
subnet. To learn more about user-defined routes, read the User-defined routes article.
Border gateway protocol (BGP): If you connect your VNet to your on-premises network using an Azure
VPN Gateway or ExpressRoute connection, you can propagate BGP routes to your VNets. BGP is the standard
routing protocol commonly used in the Internet to exchange routing and reachability information between two
or more networks. When used in the context of Azure Virtual Networks, BGP enables the Azure VPN Gateways
and your on-premises VPN devices, called BGP peers or neighbors, to exchange "routes" that inform both
gateways on the availability and reachability for those prefixes to go through the gateways or routers involved.
BGP can also enable transit routing among multiple networks by propagating routes a BGP gateway learns
from one BGP peer to all other BGP peers. To learn more about BGP, see the BGP with Azure VPN Gateways
overview article.
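Whether a route comes from the system defaults, a user-defined route table, or BGP propagation, the most specific matching prefix wins. A minimal sketch of that selection (the prefixes and next-hop types here are hypothetical):

```python
import ipaddress

# Hypothetical effective routes, from least to most specific.
routes = [
    {"prefix": "0.0.0.0/0",   "next_hop": "Internet"},
    {"prefix": "10.0.0.0/8",  "next_hop": "VirtualNetworkGateway"},
    {"prefix": "10.1.1.0/24", "next_hop": "VirtualAppliance"},
]

def select_route(routes, destination):
    """Longest-prefix match: the most specific matching route wins."""
    dest = ipaddress.ip_address(destination)
    matches = [r for r in routes if dest in ipaddress.ip_network(r["prefix"])]
    return max(matches, key=lambda r: ipaddress.ip_network(r["prefix"]).prefixlen)

print(select_route(routes, "10.1.1.20")["next_hop"])      # VirtualAppliance
print(select_route(routes, "10.2.3.4")["next_hop"])       # VirtualNetworkGateway
print(select_route(routes, "93.184.216.34")["next_hop"])  # Internet
```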

Manageability
Azure provides the following tools to monitor and manage networking:
Activity logs: All Azure resources have activity logs, which provide information about operations that have
taken place, their status, and who initiated them. To learn more about activity logs, read the Activity logs
overview article.
Diagnostic logs: Periodic and spontaneous events are created by network resources and logged in Azure
storage accounts, sent to an Azure Event Hub, or sent to Azure Log Analytics. Diagnostic logs provide insight to
the health of a resource. Diagnostic logs are provided for Load Balancer (Internet-facing), Network Security
Groups, routes, and Application Gateway. To learn more about diagnostic logs, read the Diagnostic logs
overview article.
Metrics: Metrics are performance measurements and counters collected over a period of time on resources.
Metrics can be used to trigger alerts based on thresholds. Currently metrics are available on Application
Gateway. To learn more about metrics, read the Metrics overview article.
Troubleshooting: Troubleshooting information is accessible directly in the Azure portal. The information helps
diagnose common problems with ExpressRoute, VPN Gateway, Application Gateway, Network Security Logs,
Routes, DNS, Load Balancer, and Traffic Manager.
Role-based access control (RBAC): Control who can create and manage networking resources with role-
based access control (RBAC). Learn more about RBAC by reading the Get started with RBAC article.
Packet capture: The Azure Network Watcher service provides the ability to run a packet capture on a VM
through an extension within the VM. This capability is available for Linux and Windows VMs. To learn more
about packet capture, read the Packet capture overview article.
Verify IP flows: Network Watcher allows you to verify IP flows between an Azure VM and a remote resource
to determine whether packets are allowed or denied. This capability provides administrators the ability to
quickly diagnose connectivity issues. To learn more about how to verify IP flows, read the IP flow verify
overview article.
Troubleshoot VPN connectivity: The VPN troubleshooter capability of Network Watcher provides the ability
to query a connection or gateway and verify the health of the resources. To learn more about troubleshooting
VPN connections, read the VPN connectivity troubleshooting overview article.
View network topology: View a graphical representation of the network resources in a VNet with Network
Watcher. To learn more about viewing network topology, read the Topology overview article.

Deployment and configuration tools


You can deploy and configure Azure networking resources with any of the following tools:
Azure portal: A graphical user interface that runs in a browser. Open the Azure portal.
Azure PowerShell: Command-line tools for managing Azure from Windows computers. Learn more about
Azure PowerShell by reading the Azure PowerShell overview article.
Azure command-line interface (CLI): Command-line tools for managing Azure from Linux, macOS, or
Windows computers. Learn more about the Azure CLI by reading the Azure CLI overview article.
Azure Resource Manager templates: A file (in JSON format) that defines the infrastructure and
configuration of an Azure solution. By using a template, you can repeatedly deploy your solution throughout its
lifecycle and have confidence your resources are deployed in a consistent state. To learn more about authoring
templates, read the Best practices for creating templates article. Templates can be deployed with the Azure
portal, CLI, or PowerShell. To get started with templates right away, deploy one of the many pre-configured
templates in the Azure Quickstart Templates library.

Pricing
Some of the Azure networking services have a charge, while others are free. View the Virtual network, VPN
Gateway, Application Gateway, Load Balancer, Network Watcher, DNS, Traffic Manager and ExpressRoute pricing
pages for more information.

Next steps
Create your first VNet, and connect a few VMs to it, by completing the steps in the Create your first virtual
network article.
Connect your computer to a VNet by completing the steps in the Configure a point-to-site connection article.
Load balance Internet traffic to public servers by completing the steps in the Create an Internet-facing load
balancer article.
Microsoft Azure Virtual Datacenter: A Network
Perspective
5/8/2018 • 34 minutes to read

Microsoft Azure: Move faster, Save money, Integrate on-premises apps and data

Overview
Migrating on-premises applications to Azure, even without any significant changes (an approach known as “lift and
shift”), provides organizations the benefits of a secured and cost-efficient infrastructure. However, to make the most
of the agility possible with cloud computing, enterprises should evolve their architectures to take advantage of
Azure capabilities. Microsoft Azure delivers hyper-scale services and infrastructure, enterprise-grade capabilities
and reliability, and many choices for hybrid connectivity. Customers can choose to access these cloud services
either via the Internet or with Azure ExpressRoute, which provides private network connectivity. The Microsoft
Azure platform allows customers to seamlessly extend their infrastructure into the cloud and build multi-tier
architectures. Additionally, Microsoft partners provide enhanced capabilities by offering security services and
virtual appliances that are optimized to run in Azure.
This article provides an overview of patterns and designs that can be used to solve the architectural scale,
performance, and security concerns many customers face when thinking about moving en masse to the cloud. An
overview of how to fit different organizational IT roles into the management and governance of the system is also
discussed, with emphasis on security requirements and cost optimization.

What is a Virtual Data Center?


In the early days, cloud solutions were designed to host single, relatively isolated applications in the public
cloud. This approach worked well for a few years. However, as the benefits of cloud solutions became apparent
and multiple large-scale workloads were hosted on the cloud, addressing security, reliability, performance, and cost
concerns of deployments in one or more regions became vital throughout the life cycle of the cloud service.
The following cloud deployment diagram illustrates some examples of security gaps (red box) and room for
optimizing network virtual appliances across workloads (yellow box).
The Virtual Data Center (vDC) was born from this necessity for scaling to support enterprise workloads, and the
need to deal with the problems introduced when supporting large-scale applications in the public cloud.
A vDC is not just the application workloads in the cloud, but also the network, security, management, and
infrastructure (for example, DNS and Directory Services). It usually also provides a private connection back to an
on-premises network or data center. As more and more workloads move to Azure, it is important to think about the
supporting infrastructure and objects that these workloads are placed in. Thinking carefully about how resources
are structured can avoid the proliferation of hundreds of "workload islands" that must be managed separately with
independent data flow, security models, and compliance challenges.
A Virtual Data Center is essentially a collection of separate but related entities with common supporting functions,
features, and infrastructure. By viewing your workloads as an integrated vDC, you can realize reduced cost due to
economies of scale, optimized security through component and data flow centralization, along with easier
operations, management, and compliance audits.

NOTE
It's important to understand that the vDC is NOT a discrete Azure product, but the combination of various features and
capabilities to meet your exact requirements. vDC is a way of thinking about your workloads and Azure usage to maximize
your resources and abilities in the cloud. The virtual DC is therefore a modular approach to building up IT services in
Azure, respecting organizational roles and responsibilities.

The vDC can help enterprises get workloads and applications into Azure for the following scenarios:
Hosting multiple related workloads
Migrating workloads from an on-premises environment to Azure
Implementing shared or centralized security and access requirements across workloads
Mixing DevOps and Centralized IT appropriately for a large enterprise
The key to unlocking the advantages of a vDC is a centralized hub-and-spoke topology with a mix of Azure features:
Azure VNet, NSGs, VNet peering, user-defined routes (UDR), and Azure identity with role-based access control
(RBAC).
Who Needs a Virtual Data Center?
Any Azure customer that needs to move more than a couple of workloads into Azure can benefit from thinking
about using common resources. Depending on the magnitude, even single applications can benefit from using the
patterns and components used to build a vDC.
If your organization has a centralized IT, Network, Security, and/or Compliance team/department, a vDC can help
enforce policy points, segregation of duty, and ensure uniformity of the underlying common components while
giving application teams as much freedom and control as is appropriate for your requirements.
Organizations that are looking to adopt DevOps can utilize the vDC concepts to provide authorized pockets of Azure
resources and ensure they have total control within that group (either subscription or resource group in a common
subscription), but the network and security boundaries stay compliant as defined by a centralized policy in a hub
VNet and Resource Group.

Considerations on Implementing a Virtual Data Center


When designing a vDC, there are several pivotal issues to consider:
Identity and Directory Services
Security infrastructure
Connectivity to the cloud
Connectivity within the cloud
Identity and Directory Service

Identity and Directory services are a key aspect of all data centers, both on-premises and in the cloud. Identity is
related to all aspects of access and authorization to services within the vDC. To help ensure that only authorized
users and processes access your Azure Account and resources, Azure uses several types of credentials for
authentication. These include passwords (to access the Azure account), cryptographic keys, digital signatures, and
certificates. Azure Multi-Factor Authentication (MFA) is an additional layer of security for accessing Azure services.
Azure MFA provides strong authentication with a range of easy verification options—phone call, text message, or
mobile app notification—and allows customers to choose the method they prefer.
Any large enterprise needs to define an identity management process that describes the management of individual
identities, their authentication, authorization, roles, and privileges within or across the vDC. The goals of this
process should be to increase security and productivity while decreasing cost, downtime, and repetitive manual
tasks.
Enterprises can require a demanding mix of services for different lines of business (LOBs), and
employees often have different roles when involved with different projects. A vDC requires good cooperation
between different teams, each with specific role definitions, to get systems running with good governance. The
matrix of responsibilities, access, and rights can be extremely complex. Identity management in a vDC is
implemented through Azure Active Directory (AAD) and Role-Based Access Control (RBAC).
A Directory Service is a shared information infrastructure for locating, managing, administering, and organizing
everyday items and network resources. These resources can include volumes, folders, files, printers, users, groups,
devices, and other objects. Each resource on the network is considered an object by the directory server.
Information about a resource is stored as a collection of attributes associated with that resource or object.
All Microsoft online business services rely on Azure Active Directory (AAD) for sign-in and other identity needs.
Azure Active Directory is a comprehensive, highly available identity and access management cloud solution that
combines core directory services, advanced identity governance, and application access management. AAD can be
integrated with on-premises Active Directory to enable single sign-on for all cloud-based and locally hosted (on-
premises) applications. The user attributes of on-premises Active Directory can be automatically synchronized to
AAD.
A single global administrator is not required to assign all permissions in a vDC. Instead each specific department
(or group of users or services in the Directory Service) can have the permissions required to manage their own
resources within a vDC. Structuring permissions requires balancing. Too many permissions can impede
performance efficiency, and too few or loose permissions can increase security risks. Azure Role-Based Access
Control (RBAC) helps address this problem by offering fine-grained access management for vDC resources.
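The scoped-permission idea can be sketched as follows (an assumed simplified model; the principals, roles, and scope paths are hypothetical, and this is not the Azure RBAC API):

```python
# Hypothetical role assignments: the network team manages the hub resource
# group, while an application team manages only its own spoke.
assignments = [
    {"principal": "net-team", "role": "Network Contributor",
     "scope": "/subscriptions/sub1/resourceGroups/hub-rg"},
    {"principal": "app-team1", "role": "Contributor",
     "scope": "/subscriptions/sub1/resourceGroups/spoke1-rg"},
]

def is_authorized(principal, role, resource_scope):
    """A role granted at a scope is inherited by every child scope below it."""
    return any(
        a["principal"] == principal
        and a["role"] == role
        and resource_scope.startswith(a["scope"])
        for a in assignments
    )

hub_vnet = ("/subscriptions/sub1/resourceGroups/hub-rg"
            "/providers/Microsoft.Network/virtualNetworks/hub-vnet")
print(is_authorized("net-team", "Network Contributor", hub_vnet))  # True
print(is_authorized("app-team1", "Contributor", hub_vnet))         # False
```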
Security Infrastructure

Security infrastructure, in the context of a vDC, is mainly related to the segregation of traffic in the vDC's specific
virtual network segment, and how to control ingress and egress flows throughout the vDC. Azure is based on
multi-tenant architecture that prevents unauthorized and unintentional traffic between deployments, using Virtual
Network (VNet) isolation, access control lists (ACLs), load balancers, and IP filters, along with traffic flow policies.
Network address translation (NAT) separates internal network traffic from external traffic.
The Azure fabric allocates infrastructure resources to tenant workloads and manages communications to and from
virtual machines (VMs). The Azure hypervisor enforces memory and process separation between VMs and
securely routes network traffic to guest OS tenants.
Connectivity to the cloud

The vDC needs connectivity with external networks to offer services to customers, partners and/or internal users.
This usually means connectivity not only to the Internet, but also to on-premises networks and data centers.
Customers can build their security policies to control what and how specific vDC hosted services are accessible
to/from the Internet using Network Virtual Appliances (with filtering and traffic inspection), and custom routing
policies and network filtering (User Defined Routing and Network Security Groups).
Enterprises often need to connect vDCs to on-premises data centers or other resources. The connectivity between
Azure and on-premises networks is therefore a crucial aspect when designing an effective architecture. Enterprises
have two different ways to create an interconnection between vDC and on-premises in Azure: transit over the
Internet and/or by private direct connections.
An Azure Site-to-Site VPN is an interconnection service over the Internet between on-premises networks and
the vDC, established through secure encrypted connections (IPsec/IKE tunnels). An Azure Site-to-Site connection is
flexible, quick to create, and does not require any further procurement, as all connections go over the Internet.
ExpressRoute is an Azure connectivity service that lets you create private connections between vDC and the on-
premises networks. ExpressRoute connections do not go over the public Internet, and offer higher security,
reliability, and higher speeds (up to 10 Gbps) along with consistent latency. ExpressRoute is very useful for vDCs,
as ExpressRoute customers can get the benefits of compliance rules associated with private connections.
Deploying ExpressRoute connections involves engaging with an ExpressRoute service provider. For customers that
need to start quickly, it is common to initially use Site-to-Site VPN to establish connectivity between the vDC and
on-premises resources, and then migrate to ExpressRoute connection.
Connectivity within the cloud

VNets and VNet Peering are the basic networking connectivity services inside a vDC. A VNet guarantees a natural
boundary of isolation for vDC resources, and VNet peering allows intercommunication between different VNets
within the same Azure region or even across regions. Traffic control inside a VNet and between VNets needs to
match a set of security rules specified through Access Control Lists (Network Security Groups), Network Virtual
Appliances, and custom routing tables (UDR).

Virtual Data Center Overview


Topology
A hub-and-spoke model extends the Virtual Data Center within a single Azure region.
The hub is the central zone that controls and inspects ingress and/or egress traffic between different zones:
Internet, on-premises, and the spokes. The hub and spoke topology gives the IT department an effective way to
enforce security policies in a central location, while reducing the potential for misconfiguration and exposure.
The hub contains the common service components consumed by the spokes. Here are a few typical examples of
common central services:
The Windows Active Directory infrastructure (with the related ADFS service) required for user authentication of
third parties accessing from untrusted networks before getting access to the workloads in the spoke
A DNS service to resolve naming for the workload in the spokes, to access resources on-premises and on the
Internet
A PKI infrastructure, to implement single sign-on on workloads
Flow control (TCP/UDP) between the spokes and the Internet
Flow control between the spoke and on-premises
If desired, flow control between one spoke and another
The vDC reduces overall cost by using the shared hub infrastructure between multiple spokes.
The role of each spoke can be to host different types of workloads. The spokes can also provide a modular
approach for repeatable deployments (for example, dev and test, User Acceptance Testing, pre-production, and
production) of the same workloads. The spokes can also be used to segregate and enable different groups within
your organization (for example, DevOps groups). Inside a spoke, it is possible to deploy a basic workload or
complex multi-tier workloads with traffic control between the tiers.
Subscription limits and multiple hubs

In Azure, every component, whatever the type, is deployed in an Azure Subscription. The isolation of Azure
components in different Azure subscriptions can satisfy the requirements of different LOBs, such as setting up
differentiated levels of access and authorization.
A single vDC can scale up to a large number of spokes, although, as with every IT system, there are platform limits.
The hub deployment is bound to a specific Azure subscription, which has restrictions and limits (for example, a max
number of VNet peerings - see Azure subscription and service limits, quotas, and constraints for details). In cases
where limits may be an issue, the architecture can scale up further by extending the model from a single hub and
spokes to a cluster of hubs and spokes. Multiple hubs in one or more Azure regions can be interconnected using
VNet Peering, ExpressRoute, or site-to-site VPN.
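As a rough capacity sketch, the number of hubs needed for a given spoke count is a ceiling division against the per-hub peering limit. The limit value below is purely illustrative; check the Azure subscription and service limits page for the real figures:

```python
import math

def hubs_needed(spoke_count: int, peerings_per_hub: int) -> int:
    """Minimum number of hubs when each hub VNet can hold at most
    `peerings_per_hub` spoke peerings (illustrative limit)."""
    if spoke_count <= 0:
        return 1  # a vDC always has at least one hub
    return math.ceil(spoke_count / peerings_per_hub)

print(hubs_needed(120, 50))  # 3 hubs for 120 spokes
print(hubs_needed(50, 50))   # a single hub is still enough
```

The same arithmetic applies when the binding limit is something other than peerings, such as routes per route table.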

The introduction of multiple hubs increases the cost and management effort of the system and would only be
justified by scalability (examples: system limits or redundancy) and regional replication (examples: end-user
performance or disaster recovery). In scenarios requiring multiple hubs, all the hubs should strive to offer the same
set of services for operational ease.
Interconnection between spokes

Inside a single spoke, it is possible to implement complex multi-tier workloads. Multi-tier configurations can be
implemented using subnets (one for every tier) in the same VNet and filtering the flows using NSGs.
On the other hand, an architect may want to deploy a multi-tier workload across multiple VNets. Using VNet
peering, spokes can connect to other spokes in the same hub or different hubs. A typical example of this scenario is
the case where application processing servers are in one spoke (VNet), while the database is deployed in a different
spoke (VNet). In this case, it is easy to interconnect the spokes with VNet peering and thereby avoid transiting
through the hub. A careful architecture and security review should be performed to ensure that bypassing the hub
doesn’t bypass important security or auditing points that may only exist in the hub.
Spokes can also be interconnected to a spoke that acts as a hub. This approach creates a two-level hierarchy: the
spoke in the higher level (level 0) becomes the hub of the lower spokes (level 1) of the hierarchy. The spokes of a
vDC need to forward their traffic to the central hub to reach either the on-premises network or the internet. An
architecture with two levels of hub introduces complex routing that removes the benefits of a simple hub-spoke
relationship.
Although Azure allows complex topologies, one of the core principles of the vDC concept is repeatability and
simplicity. To minimize management effort, the simple hub-spoke design is the recommended vDC reference
architecture.
Components
A Virtual Data Center is made up of four basic component types: Infrastructure, Perimeter Networks,
Workloads, and Monitoring.
Each component type consists of various Azure features and resources. Your vDC is made up of instances of
multiple component types and multiple variations of the same component type. For instance, you may have many
different, logically separated, workload instances that represent different applications. You use these different
component types and instances to ultimately build the vDC.

The preceding high-level architecture of a vDC shows different component types used in different zones of the
hub-spokes topology. The diagram shows infrastructure components in various parts of the architecture.
As a good practice (for an on-premises DC or vDC), access rights and privileges should be group-based. Dealing
with groups, instead of individual users, helps maintain access policies consistently across teams and aids in
minimizing configuration errors. Assigning and removing users to and from appropriate groups helps keep the
privileges of a specific user up-to-date.
Each role group should have a unique prefix on its name, making it easy to identify which group is associated
with which workload. For instance, a workload hosting an authentication service might have groups named
AuthServiceNetOps, AuthServiceSecOps, AuthServiceDevOps, and AuthServiceInfraOps. Likewise, centralized
roles, or roles not related to a specific service, could be prefaced with “Corp”, for example CorpNetOps.
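The naming convention can be captured in a tiny helper; the group and workload names are the illustrative ones used above:

```python
def role_group_name(prefix: str, role: str) -> str:
    """Compose a role-group name as <prefix><role>, where the prefix is the
    workload name, or "Corp" for centralized roles."""
    return f"{prefix}{role}"

print(role_group_name("AuthService", "NetOps"))  # AuthServiceNetOps
print(role_group_name("Corp", "NetOps"))         # CorpNetOps
```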
Many organizations use a variation of the following groups to provide a major breakdown of roles:
The central IT group (Corp) has the ownership rights to control infrastructure components (such as networking
and security), and therefore needs to have the role of contributor on the subscription (and have control of the
hub) and network contributor rights in the spokes. Large organizations frequently split these management
responsibilities between multiple teams, such as a Network Operations (CorpNetOps) group (with exclusive
focus on networking) and a Security Operations (CorpSecOps) group (responsible for firewall and security
policy). In this specific case, two different groups need to be created for assignment of these custom roles.
The dev & test (AppDevOps) group has the responsibility to deploy workloads (Apps or Services). This group
takes the role of Virtual Machine Contributor for IaaS deployments and/or one or more PaaS contributor’s
roles (see Built-in roles for Azure Role-Based Access Control). Optionally the dev & test team may need to have
visibility on security policies (NSGs) and routing policies (UDRs) inside the hub or a specific spoke. Therefore, in
addition to the roles of contributor for workloads, this group would also need the role of Network Reader.
The operation and maintenance group (CorpInfraOps or AppInfraOps) has the responsibility of managing
workloads in production. This group needs to be a subscription contributor on workloads in any production
subscriptions. Some organizations might also evaluate if they need an additional escalation support team group
with the role of subscription contributor in production and in the central hub subscription, in order to fix
potential configuration issues in the production environment.
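The scoping of these role assignments can be sketched as a path-prefix check, reflecting the rule that a role granted at a subscription applies to everything beneath it; the principals, roles, and scope paths below are illustrative, not real Azure identifiers:

```python
# Role assignments: (principal, role, scope). A scope is a "/"-separated
# path; an assignment applies to that scope and everything beneath it.
ASSIGNMENTS = [
    ("vm-admin",  "Virtual Machine Contributor", "/subscriptions/sub1"),
    ("sql-admin", "SQL DB Contributor",          "/subscriptions/sub1/resourceGroups/data-rg"),
]

def has_role(principal: str, role: str, scope: str) -> bool:
    for p, r, s in ASSIGNMENTS:
        if p == principal and r == role and (scope == s or scope.startswith(s + "/")):
            return True
    return False

# A subscription-level assignment is inherited by a child resource group...
print(has_role("vm-admin", "Virtual Machine Contributor",
               "/subscriptions/sub1/resourceGroups/web-rg"))  # True
# ...but a resource-group assignment does not flow upward.
print(has_role("sql-admin", "SQL DB Contributor", "/subscriptions/sub1"))  # False
```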
A vDC is structured so that groups created for the central IT teams managing the hub have corresponding groups
at the workload level. In addition to managing hub resources, only the central IT groups would be able to control
external access and top-level permissions on the subscription. However, workload groups would be able to control
resources and permissions of their VNet independently of central IT.
The vDC needs to be partitioned to securely host multiple projects across different Lines of Business (LOBs). All
projects require different isolated environments (Dev, UAT, production). Separate Azure subscriptions for each of
these environments provide natural isolation.

The preceding diagram shows the relationship between an organization's projects, users, groups, and the
environments where the Azure components are deployed.
Typically in IT, an environment (or tier) is a system in which multiple applications are deployed and executed. Large
enterprises use a development environment (where changes are originally made and tested) and a production
environment (what end-users use). Those environments are separated, often with several staging environments in
between them to allow phased deployment (rollout), testing, and rollback in case of problems. Deployment
architectures vary significantly, but usually the basic process of starting at development (DEV) and ending at
production (PROD) is still followed.
A common architecture for these types of multi-tier environments consists of DevOps (development and testing),
UAT (staging), and production environments. Organizations can leverage single or multiple Azure AD tenants to
define access and rights to these environments. The previous diagram shows a case where two different Azure AD
tenants are used: one for DevOps and UAT, and the other exclusively for production.
The presence of different Azure AD tenants enforces the separation between environments. The same group of
users (as an example, Central IT) needs to authenticate using a different URI to access a different AD tenant and
modify the roles or permissions of either the DevOps or production environments of a project. Requiring different
user authentication to access different environments reduces possible outages and other issues caused by human
errors.
Component Type: Infrastructure
This component type is where most of the supporting infrastructure resides. It's also where your centralized IT,
Security, and/or Compliance teams spend most of their time.

Infrastructure components provide an interconnection between the different components of a vDC, and are present
in both the hub and the spokes. The responsibility for managing and maintaining the infrastructure components is
typically assigned to the central IT and/or security team.
One of the primary tasks of the IT infrastructure team is to guarantee the consistency of IP address schemas across
the enterprise. The private IP address space assigned to the vDC needs to be consistent, and must not overlap with
private IP addresses assigned on your on-premises networks.
While NAT on the on-premises edge routers or in Azure environments can avoid IP address conflicts, it adds
complications to your infrastructure components. Simplicity of management is one of the key goals of vDC, so
using NAT to handle IP concerns is not a recommended solution.
Infrastructure components contain the following functionality:
Identity and directory services. Access to every resource type in Azure is controlled by an identity stored in a
directory service. The directory service stores not only the list of users, but also the access rights to resources in
a specific Azure subscription. These services can exist cloud-only, or they can be synchronized with on-premises
identity stored in Active Directory.
Virtual Network. Virtual Networks are one of the main components of a vDC, and enable you to create a traffic
isolation boundary on the Azure platform. A Virtual Network is composed of a single or multiple virtual
network segments, each with a specific IP network prefix (a subnet). The Virtual Network defines an internal
perimeter area where IaaS virtual machines and PaaS services can establish private communications. VMs (and
PaaS services) in one virtual network cannot communicate directly to VMs (and PaaS services) in a different
virtual network, even if both virtual networks are created by the same customer, under the same subscription.
Isolation is a critical property that ensures customer VMs and communication remains private within a virtual
network.
UDR. Traffic in a Virtual Network is routed by default based on the system routing table. A User-Defined Route is
a custom routing table that network administrators can associate to one or more subnets to override the
behavior of the system routing table and define a communication path within a virtual network. The presence of
UDRs guarantees that egress traffic from the spoke transits through specific custom VMs and/or Network
Virtual Appliances and load balancers present in the hub and in the spokes.
NSG. A Network Security Group is a list of security rules that filter traffic based on source IP, destination IP,
protocol, source port, and destination port. The NSG can be applied to a subnet, a virtual NIC associated with an
Azure VM, or both. The NSGs are essential to implement correct flow control in the hub and in the spokes. The
level of security afforded by the NSG is a function of which ports you open, and for what purpose. Customers
should apply additional per-VM filters with host-based firewalls such as iptables or the Windows Firewall.
DNS. The name resolution of resources in the VNets of a vDC is provided through DNS. Azure provides DNS
services for both public and private name resolution. Private zones provide name resolution both within a
virtual network and across virtual networks. Private zones can span not only virtual networks in the same
region, but also regions and subscriptions. For public resolution, Azure DNS provides a hosting
service for DNS domains, providing name resolution using Microsoft Azure infrastructure. By hosting your
domains in Azure, you can manage your DNS records using the same credentials, APIs, tools, and billing as
your other Azure services.
Subscription and Resource Group Management. A subscription defines a natural boundary to create
multiple groups of resources in Azure. Resources in a subscription are assembled together in logical containers
named Resource Groups. The Resource Group represents a logical group to organize the resources of a vDC.
RBAC. Through RBAC, it is possible to map organizational roles along with rights to access specific Azure
resources, allowing you to restrict users to only a certain subset of actions. With RBAC, you can grant access by
assigning the appropriate role to users, groups, and applications within the relevant scope. The scope of a role
assignment can be an Azure subscription, a resource group, or a single resource. RBAC allows inheritance of
permissions. A role assigned at a parent scope also grants access to the children contained within it. Using
RBAC, you can segregate duties and grant only the amount of access to users that they need to perform their
jobs. For example, use RBAC to let one employee manage virtual machines in a subscription, while another can
manage SQL DBs within the same subscription.
VNet Peering. The fundamental feature used to create the infrastructure of a vDC is VNet Peering, a
mechanism that connects two virtual networks (VNets) in the same region through the Azure data center
network, or using the Azure world-wide backbone across regions.
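As an illustration of the NSG filtering described in the list above, the following sketch models rules evaluated in priority order, with the first match deciding and an implicit default deny at the end; the rules themselves are hypothetical:

```python
from typing import NamedTuple, Optional

class Rule(NamedTuple):
    priority: int             # lower number = evaluated first
    protocol: str             # "TCP", "UDP", or "*" for any
    dest_port: Optional[int]  # None matches any destination port
    action: str               # "Allow" or "Deny"

RULES = sorted([
    Rule(100,  "TCP", 443,  "Allow"),  # allow HTTPS in
    Rule(200,  "TCP", 22,   "Deny"),   # block SSH
    Rule(4096, "*",   None, "Deny"),   # implicit default deny
], key=lambda r: r.priority)

def evaluate(protocol: str, dest_port: int) -> str:
    for r in RULES:  # first matching rule wins
        if r.protocol in ("*", protocol) and r.dest_port in (None, dest_port):
            return r.action
    return "Deny"

print(evaluate("TCP", 443))  # Allow
print(evaluate("TCP", 22))   # Deny
print(evaluate("UDP", 53))   # Deny (falls through to the default rule)
```

Real NSG rules also match on source/destination address prefixes and source ports; this sketch keeps only protocol and destination port for brevity.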
Component Type: Perimeter Networks
Perimeter network components (also known as a DMZ network) allow you to provide network connectivity with
your on-premises or physical data center networks, along with any connectivity to and from the Internet. It's also
where your network and security teams likely spend most of their time.
Incoming packets should flow through the security appliances in the hub, such as the firewall, IDS, and IPS, before
reaching the back-end servers in the spokes. Internet-bound packets from the workloads should also flow through
the security appliances in the perimeter network for policy enforcement, inspection, and auditing purposes, before
leaving the network.
Perimeter network components provide the following features:
Virtual Networks, UDR, NSG
Network Virtual Appliance
Load Balancer
Application Gateway / WAF
Public IPs
Usually, the central IT and security teams have responsibility for requirement definition and operations of the
perimeter networks.

The preceding diagram shows the enforcement of two perimeters with access to the internet and an on-premises
network, both resident in the hub. In a single hub, the perimeter network to internet can scale up to support large
numbers of LOBs, using multiple farms of Web Application Firewalls (WAFs) and/or firewalls.
Virtual Networks The hub is typically built on a VNet with multiple subnets to host the different types of services
filtering and inspecting traffic to or from the internet via NVAs, WAFs, and Azure Application Gateways.
UDR Using UDR, customers can deploy firewalls, IDS/IPS, and other virtual appliances, and route network traffic
through these security appliances for security boundary policy enforcement, auditing, and inspection. UDRs can be
created in both the hub and the spokes to guarantee that traffic transits through the specific custom VMs, Network
Virtual Appliances, and load balancers used by the vDC. To guarantee that traffic generated from VMs resident in
the spoke transits to the correct virtual appliances, a UDR needs to be set in the subnets of the spoke by setting the
front-end IP address of the internal load balancer as the next-hop. The internal load balancer distributes the internal
traffic to the virtual appliances (load balancer back-end pool).
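This routing arrangement can be sketched as a longest-prefix-match lookup in which a user-defined route overrides the system route for the same prefix; this is a conceptual model, not Azure's actual route-selection implementation, and the addresses are illustrative:

```python
import ipaddress

# Each route: (prefix, next_hop, source). A "user" route overrides a
# "system" route when both match with the same prefix length.
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "vnet-local", "system"),
    (ipaddress.ip_network("0.0.0.0/0"),  "internet",   "system"),
    (ipaddress.ip_network("0.0.0.0/0"),  "10.0.1.4",   "user"),  # hypothetical ILB front end in the hub
]

def next_hop(dest: str) -> str:
    addr = ipaddress.ip_address(dest)
    candidates = [r for r in ROUTES if addr in r[0]]
    # Longest prefix wins; for equal prefixes, a user route beats a system route.
    best = max(candidates, key=lambda r: (r[0].prefixlen, r[2] == "user"))
    return best[1]

print(next_hop("8.8.8.8"))   # internet-bound traffic is forced through 10.0.1.4
print(next_hop("10.0.2.9"))  # intra-VNet traffic stays local
```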
Network Virtual Appliances In the hub, the perimeter network with access to the internet is normally managed
through a farm of firewalls and/or Web Application Firewalls (WAFs).
Different LOBs commonly use many web applications, and these applications tend to suffer from various
vulnerabilities and potential exploits. Web Application Firewalls are a special breed of product used to detect
attacks against web applications (HTTP/HTTPS) in more depth than a generic firewall. Compared with traditional
firewall technology, WAFs have a set of specific features to protect internal web servers from threats.
A firewall farm is a group of firewalls working in tandem under the same common administration, with a set of
security rules to protect the workloads hosted in the spokes, and control access to on-premises networks. A firewall
farm has less specialized software compared with a WAF, but has a broad application scope to filter and inspect any
type of traffic in egress and ingress. Firewall farms are normally implemented in Azure through Network Virtual
Appliances (NVAs), which are available in the Azure marketplace.
It is recommended to use one set of NVAs for traffic originating on the Internet, and another for traffic originating
on-premises. Using only one set of NVAs for both is a security risk, as it provides no security perimeter between
the two sets of network traffic. Using separate NVAs reduces the complexity of checking security rules, and makes
it clear which rules correspond to which incoming network request.
Most large enterprises manage multiple domains. Azure DNS can be used to host the DNS records for a particular
domain. As an example, the Virtual IP Address (VIP) of the Azure external load balancer (or of the WAFs) can be
registered in an A record of an Azure DNS zone.
Azure Load Balancer Azure Load Balancer offers a highly available Layer 4 (TCP, UDP) service, which can
distribute incoming traffic among service instances defined in a load-balanced set. Traffic sent to the load balancer
from front-end endpoints (public IP endpoints or private IP endpoints) can be redistributed, with or without address
translation, to a back-end IP address pool (for example, Network Virtual Appliances or VMs).
Azure Load Balancer can also probe the health of the various server instances; when a probe fails to respond, the
load balancer stops sending traffic to the unhealthy instance. In a vDC, an external load balancer is present in the
hub (for instance, to balance traffic to NVAs) and in the spokes (to perform tasks like balancing traffic between the
different VMs of a multi-tier application).
Application Gateway Microsoft Azure Application Gateway is a dedicated virtual appliance providing an application
delivery controller (ADC) as a service, offering various Layer 7 load-balancing capabilities for your application. It
allows you to optimize web farm productivity by offloading CPU-intensive SSL termination to the application
gateway. It also provides other Layer 7 routing capabilities, including round-robin distribution of incoming traffic,
cookie-based session affinity, URL path-based routing, and the ability to host multiple websites behind a single
Application Gateway. A web application firewall (WAF) is also provided as part of the Application Gateway WAF
SKU. This SKU provides protection to web applications from common web vulnerabilities and exploits. Application
Gateway can be configured as an internet-facing gateway, an internal-only gateway, or a combination of both.
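URL path-based routing can be sketched as a longest-matching-prefix lookup from request path to back-end pool; the paths and pool names are illustrative:

```python
# Path rules: the longest matching prefix wins; "/" is the default pool.
PATH_RULES = {
    "/images/": "image-pool",
    "/api/":    "api-pool",
    "/":        "web-pool",
}

def route(path: str) -> str:
    match = max((p for p in PATH_RULES if path.startswith(p)), key=len)
    return PATH_RULES[match]

print(route("/images/logo.png"))  # image-pool
print(route("/api/v1/orders"))    # api-pool
print(route("/index.html"))       # web-pool (default)
```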
Public IPs Some Azure features enable you to associate service endpoints with a public IP address, which allows
your resource to be accessed from the internet. This endpoint uses Network Address Translation (NAT) to route
traffic to the internal address and port on the Azure virtual network. This path is the primary way for external
traffic to pass into the virtual network. The Public IP addresses can be configured to determine which traffic is
passed in, and how and where it's translated on to the virtual network.
Component Type: Monitoring
Monitoring components provide visibility and alerting from all the other component types. All teams should have
access to monitoring for the components and services they have access to. If you have a centralized help desk or
operations teams, they would need to have integrated access to the data provided by these components.
Azure offers different types of logging and monitoring services to track the behavior of Azure hosted resources.
Governance and control of workloads in Azure is based not just on collecting log data, but also the ability to trigger
actions based on specific reported events.
Azure Monitor - Azure includes multiple services that individually perform a specific role or task in the
monitoring space. Together, these services deliver a comprehensive solution for collecting, analyzing, and acting on
telemetry from your application and the Azure resources that support them. They can also work to monitor critical
on-premises resources in order to provide a hybrid monitoring environment. Understanding the tools and data
that are available is the first step in developing a complete monitoring strategy for your application.
There are two major types of logs in Azure:
Activity Logs (also referred to as the "Operational Log") provide insight into the operations that were
performed on resources in the Azure subscription. These logs report the control-plane events for your
subscriptions.
Every Azure resource produces audit logs.
Azure Diagnostic Logs are logs generated by a resource that provide rich, frequent data about the
operation of that resource. The content of these logs varies by resource type.

In a vDC, it is extremely important to track the NSGs logs, particularly this information:
Event logs: provide information on which NSG rules are applied to VMs and instance roles, based on MAC
address.
Counter logs: track how many times each NSG rule was executed to deny or allow traffic.
All logs can be stored in Azure Storage Accounts for audit, static analysis, or backup purposes. When the logs are
stored in an Azure storage account, customers can use different types of frameworks to retrieve, prep, analyze, and
visualize this data to report the status and health of cloud resources.
Large enterprises have likely already acquired a standard framework for monitoring on-premises systems, and
can extend that framework to integrate logs generated by cloud deployments. For organizations that wish to keep
all the logging in the cloud, Log Analytics is a great choice. Since Log Analytics is implemented as a
cloud-based service, you can have it up and running quickly with minimal investment in infrastructure services.
Log Analytics can also integrate with System Center components such as System Center Operations Manager to
extend your existing management investments into the cloud.
Log Analytics is a service in Azure that helps collect, correlate, search, and act on log and performance data
generated by operating systems, applications, and infrastructure cloud components. It gives customers real-time
operational insights using integrated search and custom dashboards to analyze all the records across all your
workloads in a vDC.
The Network Performance Monitor (NPM) solution inside OMS can provide detailed end-to-end network
information, including a single view of your Azure networks and on-premises networks, with specific monitors for
ExpressRoute and public services.
Component Type: Workloads
Workload components are where your actual applications and services reside. It's also where your application
development teams spend most of their time.
The workload possibilities are truly endless. The following are just a few of the possible workload types:
Internal LOB Applications
Line-of-business applications are computer applications critical to the ongoing operation of an enterprise. LOB
applications have some common characteristics:
Interactive. LOB applications are interactive by nature: data is entered, and result/reports are returned.
Data driven. LOB applications are data intensive with frequent access to the databases or other storage.
Integrated. LOB applications offer integration with other systems within or outside the organization.
Customer facing web sites (Internet or Internal facing) Most applications that interact with the Internet are
web sites. Azure offers the capability to run a web site on an IaaS VM or from an Azure Web Apps site (PaaS).
Azure Web Apps support integration with VNets, which allows deployment of the Web Apps in a spoke of a
vDC. For internal-facing web sites, VNet integration means that you don't need to expose an Internet endpoint
for your applications; instead, the resources can be reached via private, non-internet-routable addresses from
your private VNet.
Big Data/Analytics When data needs to scale up to very large volumes, databases may not scale properly.
Hadoop technology offers a system to run distributed queries in parallel on a large number of nodes. Customers
have the option to run data workloads in IaaS VMs or PaaS (HDInsight). HDInsight supports deployment into a
location-based VNet, and can be deployed to a cluster in a spoke of the vDC.
Events and Messaging Azure Event Hubs is a hyper-scale telemetry ingestion service that collects, transforms,
and stores millions of events. As a distributed streaming platform, it offers low latency and configurable time
retention, enabling you to ingest massive amounts of telemetry into Azure and read that data from multiple
applications. With Event Hubs, a single stream can support both real-time and batch-based pipelines.
A highly reliable cloud messaging service between applications and services can be implemented through Azure
Service Bus, which offers asynchronous brokered messaging between client and server, along with structured
first-in-first-out (FIFO) messaging and publish/subscribe capabilities.
Multiple vDC
So far, this article has focused on a single vDC, describing the basic components and architecture that contribute to
a resilient vDC. Azure features such as Azure Load Balancer, NVAs, availability sets, and scale sets, along with other
mechanisms, contribute to a system that allows you to build solid SLA levels into your production services.
However, a single vDC is hosted within a single region, and is vulnerable to a major outage that might affect that
entire region. Customers that want to achieve high SLAs need to protect their services through deployments of the
same project in two (or more) vDCs, placed in different regions.
In addition to SL A concerns, there are several common scenarios where deploying multiple vDCs makes sense:
Regional/Global presence
Disaster Recovery
Mechanism to divert traffic between DCs
Regional/Global presence
Azure data centers are present in numerous regions worldwide. When selecting multiple Azure data centers,
customers need to consider two related factors: geographical distances and latency. Customers need to evaluate
the geographical distance between the vDCs and the distance between the vDC and the end users, to offer the best
user experience.
The Azure regions where vDCs are hosted also need to conform with regulatory requirements established by any
legal jurisdiction under which your organization operates.
Disaster Recovery
The implementation of a disaster recovery plan is strongly related to the type of workload concerned, and the
ability to synchronize the workload state between different vDCs. Ideally, most customers want to synchronize
application data between deployments running in two different vDCs to implement a fast fail-over mechanism.
Most applications are sensitive to latency, which can cause potential timeouts and delays in data synchronization.
Synchronization or heartbeat monitoring of applications in different vDCs requires communication between them.
Two vDCs in different regions can be connected through:
VNet Peering - VNet Peering can connect hubs across regions
ExpressRoute private peering when the vDC hubs are connected to the same ExpressRoute circuit
multiple ExpressRoute circuits connected via your corporate backbone and your vDC mesh connected to the
ExpressRoute circuits
Site-to-Site VPN connections between your vDC hubs in each Azure Region
Usually, VNet Peering or ExpressRoute connections are the preferred mechanisms, due to the higher bandwidth
and consistent latency when transiting through the Microsoft backbone.
There is no magic recipe to validate an application distributed between two (or more) vDCs located in different
regions. Customers should run network qualification tests to verify the latency and bandwidth of the connections,
and determine whether synchronous or asynchronous data replication is appropriate and what the optimal
recovery time objective (RTO) can be for their workloads.
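A hedged rule of thumb for that replication decision can be sketched as below; the 5 ms threshold is illustrative rather than an Azure limit, the point being that synchronous replication adds the full round-trip latency to every write:

```python
def replication_mode(rtt_ms: float, max_sync_rtt_ms: float = 5.0) -> str:
    """Pick a replication mode from measured round-trip latency between
    two vDCs. The default threshold is an illustrative cut-off only:
    synchronous replication makes every write wait out the full RTT."""
    return "synchronous" if rtt_ms <= max_sync_rtt_ms else "asynchronous"

print(replication_mode(2.0))   # synchronous  (nearby region pair)
print(replication_mode(40.0))  # asynchronous (cross-continent vDCs)
```

In practice the threshold should come from the application's own write-latency budget and RTO targets, measured with the qualification tests described above.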
Mechanism to divert traffic between DCs
One effective technique to divert incoming traffic from one DC to another is based on DNS. Azure Traffic
Manager uses the Domain Name System (DNS) mechanism to direct end-user traffic to the most appropriate
public endpoint in a specific vDC. Through probes, Traffic Manager periodically checks the service health of public
endpoints in different vDCs and, in case of failure of those endpoints, automatically routes to the secondary vDC.
Traffic Manager works on Azure public endpoints and can be used, for example, to control/divert traffic to Azure
VMs and Web Apps in the appropriate vDC. Traffic Manager is resilient even in the face of an entire Azure region
failing and can control the distribution of user traffic for service endpoints in different vDCs based on several
criteria (for instance, failure of a service in a specific vDC, or selecting the vDC with the lowest network latency for
the client).
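The priority-based failover behavior can be sketched as: answer DNS queries with the first endpoint, in priority order, whose health probe passes; the endpoint names are illustrative:

```python
from typing import Dict, List

def resolve(endpoints: List[str], probe_ok: Dict[str, bool]) -> str:
    """Return the first endpoint (in priority order) whose health probe
    passes -- a sketch of DNS-based priority routing."""
    for ep in endpoints:
        if probe_ok.get(ep, False):
            return ep
    raise RuntimeError("all endpoints degraded")

eps = ["vdc-westeurope.example.com", "vdc-eastus.example.com"]
print(resolve(eps, {"vdc-westeurope.example.com": True,
                    "vdc-eastus.example.com": True}))   # primary answers
print(resolve(eps, {"vdc-westeurope.example.com": False,
                    "vdc-eastus.example.com": True}))   # failover to secondary
```

Traffic Manager also supports performance (lowest latency) and weighted routing methods; only the priority method is sketched here.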
Conclusion
The Virtual Data Center is an approach to data center migration into the cloud that uses a combination of features
and capabilities to create a scalable architecture in Azure that maximizes cloud resource use, reduces costs, and
simplifies system governance. The vDC concept is based on a hub-and-spoke topology, providing common shared
services in the hub and allowing specific applications/workloads in the spokes. A vDC matches the structure of
company roles, where different departments (Central IT, DevOps, operation and maintenance) work together, each
with a specific list of roles and duties. A vDC satisfies the requirements for a "Lift and Shift" migration, but also
provides many advantages to native cloud deployments.

References
The following features were discussed in this document. Click the links to learn more.

Network Features
Azure Virtual Networks
Network Security Groups
NSG Logs
User Defined Routing
Network Virtual Appliances
Public IP Addresses
DNS

Load Balancing
Azure Load Balancer (L4)
Application Gateway (L7)
Web Application Firewall
Azure Traffic Manager

Connectivity
VNet Peering
Virtual Private Network
ExpressRoute

Identity
Azure Active Directory
Multi-Factor Authentication
Role Based Access Controls
Default AAD Roles

Monitoring
Azure Monitor
Activity Logs
Diagnostic Logs
Microsoft Operations Management Suite
Network Performance Monitor

Best Practices
Perimeter Networks Best Practices
Subscription Management
Resource Group Management
Azure Subscription Limits

Other Azure Services
Azure Web Apps
HDInsights (Hadoop)
Event Hubs
Service Bus

Next Steps
Explore VNet Peering, the underpinning technology for vDC hub and spoke designs
Implement AAD to get started with RBAC exploration
Develop a Subscription and Resource management model and RBAC model to meet the structure,
requirements, and policies of your organization. The most important activity is planning. As much as practical,
plan for reorganizations, mergers, new product lines, etc.
Asymmetric routing with multiple network paths
6/27/2017 • 6 minutes to read • Edit Online

This article explains how forward and return network traffic might take different routes when multiple paths are
available between network source and destination.
To understand asymmetric routing, it's important to understand two concepts. One is the effect of multiple
network paths. The other is how devices, like a firewall, keep state. Devices that keep state are called stateful
devices. A combination of these two factors creates scenarios in which network traffic is dropped by a stateful
device because the stateful device never saw the first packet of the flow.

Multiple network paths


When an enterprise network has only one link to the Internet through its Internet service provider, all traffic to
and from the Internet travels the same path. Often, companies purchase multiple circuits, as redundant paths, to
improve network uptime. When they do, traffic that leaves the network for the Internet might go through one
link, while the return traffic comes back through a different link. This is commonly known as
asymmetric routing. In asymmetric routing, reverse network traffic takes a different path from the original flow.

Although it primarily occurs on the Internet, asymmetric routing also applies to other combinations of multiple
paths. It applies, for example, both to an Internet path and a private path that go to the same destination, and to
multiple private paths that go to the same destination.
Each router along the way, from source to destination, computes the best path to reach a destination. The router's
determination of best possible path is based on two main factors:
Routing between external networks is based on a routing protocol, Border Gateway Protocol (BGP ). BGP takes
advertisements from neighbors and runs them through a series of steps to determine the best path to the
intended destination. It stores the best path in its routing table.
The length of a subnet mask associated with a route influences routing paths. If a router receives multiple
advertisements for the same IP address but with different subnet masks, the router prefers the advertisement
with a longer subnet mask because it's considered a more specific route.
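The longest-prefix-match preference described above can be illustrated with Python's standard-library ipaddress module. The route table below is invented for illustration; only the selection rule (a more specific prefix wins) reflects real router behavior.

```python
# Sketch of longest-prefix-match route selection. The prefixes and next-hop
# labels are illustrative, not a real routing table.
import ipaddress

def best_route(routes, destination):
    """Pick the next hop whose prefix contains destination and is most specific."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routes if dest in net]
    # A longer prefix length means a more specific (preferred) route.
    return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None

routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "internet-gw"),       # default route
    (ipaddress.ip_network("13.107.0.0/16"), "internet-gw"),   # advertised via the Internet
    (ipaddress.ip_network("13.107.6.0/24"), "expressroute"),  # more specific advertisement
]

assert best_route(routes, "13.107.6.10") == "expressroute"  # /24 beats /16 and /0
assert best_route(routes, "13.107.9.10") == "internet-gw"   # only /16 and /0 match
```

This is exactly why more specific prefixes advertised over ExpressRoute pull traffic away from the Internet path, as discussed in the ExpressRoute scenarios below.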
Stateful devices
Routers look at the IP header of a packet for routing purposes. Some devices look even deeper inside the packet.
Typically, these devices look at Layer 4 (Transmission Control Protocol, or TCP; or User Datagram Protocol, or
UDP) or even Layer 7 (Application Layer) headers. These kinds of devices are either security devices or
bandwidth-optimization devices.
A firewall is a common example of a stateful device. A firewall allows or denies a packet to pass through its
interfaces based on various fields such as protocol, TCP/UDP port, and URL headers. This level of packet
inspection puts a heavy processing load on the device. To improve performance, the firewall inspects the first
packet of a flow. If it allows the packet to proceed, it keeps the flow information in its state table. All subsequent
packets related to this flow are allowed based on the initial determination. If a packet arrives for a flow that the
firewall has no prior state information about, the firewall drops the packet.
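The first-packet inspection and state-table behavior just described can be sketched as follows. This is a deliberately simplified model (the flow key ignores ports and directionality that a real firewall tracks), intended only to show why a stateful device drops traffic from flows it never recorded.

```python
# Simplified model of a stateful firewall: the first packet of a flow is
# checked against rules; later packets are matched against the state table;
# packets belonging to unknown flows are dropped.

class StatefulFirewall:
    def __init__(self, allowed_ports):
        self.allowed_ports = allowed_ports
        self.flows = set()  # state table; keyed by endpoint pair for simplicity

    def process(self, src, dst, dport):
        key = frozenset((src, dst))     # direction-agnostic flow key
        if key in self.flows:
            return "allow"              # subsequent or return packet of a known flow
        if dport in self.allowed_ports:
            self.flows.add(key)         # full inspection, then record the flow
            return "allow"
        return "drop"

fw = StatefulFirewall(allowed_ports={443})
assert fw.process("10.0.0.4", "203.0.113.9", 443) == "allow"   # first packet, inspected
assert fw.process("203.0.113.9", "10.0.0.4", 443) == "allow"   # return packet, flow known

# The asymmetric-routing failure mode: return traffic arrives at a firewall
# that never saw the outbound flow, so there is no state entry and it drops.
fw2 = StatefulFirewall(allowed_ports={443})
assert fw2.process("203.0.113.9", "10.0.0.4", 50123) == "drop"
```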

Asymmetric routing with ExpressRoute


When you connect to Microsoft through Azure ExpressRoute, your network changes like this:
You have multiple links to Microsoft. One link is your existing Internet connection, and the other is via
ExpressRoute. Some traffic to Microsoft might go through the Internet but come back via ExpressRoute, or vice
versa.
You receive more specific IP addresses via ExpressRoute. So, for traffic from your network to Microsoft for
services offered via ExpressRoute, routers always prefer ExpressRoute.
To understand the effect these two changes have on a network, let’s consider some scenarios. As an example, you
have only one circuit to the Internet and you consume all Microsoft services via the Internet. The traffic from your
network to Microsoft and back traverses the same Internet link and passes through the firewall. The firewall
records the flow when it sees the first packet, and return packets are allowed because the flow exists in the state table.

Then, you turn on ExpressRoute and consume services offered by Microsoft over ExpressRoute. All other services
from Microsoft are consumed over the Internet. You deploy a separate firewall at your edge that is connected to
ExpressRoute. Microsoft advertises more specific prefixes to your network over ExpressRoute for specific services.
Your routing infrastructure chooses ExpressRoute as the preferred path for those prefixes. If you are not
advertising your public IP addresses to Microsoft over ExpressRoute, Microsoft communicates with your public IP
addresses via the Internet. Forward traffic from your network to Microsoft uses ExpressRoute, and reverse traffic
from Microsoft uses the Internet. When the firewall at the edge sees a response packet for a flow that it does not
find in the state table, it drops the return traffic.
If you choose to use the same network address translation (NAT) pool for ExpressRoute and for the Internet, you'll
see similar issues with the clients in your network on private IP addresses. Requests for services like Windows
Update go via the Internet because IP addresses for these services are not advertised via ExpressRoute. However,
the return traffic comes back via ExpressRoute. If Microsoft receives a route for the same address range, with the
same subnet mask, over both the Internet and ExpressRoute, it prefers ExpressRoute over the Internet. If a firewall or another stateful device that
is on your network edge and facing ExpressRoute has no prior information about the flow, it drops the packets that
belong to that flow.

Asymmetric routing solutions


You have two main options to solve the problem of asymmetric routing. One is through routing, and the other is by
using source-based NAT (SNAT).
Routing
Ensure that your public IP addresses are advertised over the appropriate wide area network (WAN) links. For
example, if you want to use the Internet for authentication traffic and ExpressRoute for your mail traffic, you
should not advertise your Active Directory Federation Services (AD FS) public IP addresses over ExpressRoute.
Similarly, be sure not to expose an on-premises AD FS server to IP addresses that the router receives over
ExpressRoute. Because routes received over ExpressRoute are more specific, they make ExpressRoute the
preferred path for authentication traffic to Microsoft, which causes asymmetric routing.
If you want to use ExpressRoute for authentication, make sure that you are advertising AD FS public IP addresses
over ExpressRoute without NAT. This way, traffic that originates from Microsoft and goes to an on-premises AD FS
server goes over ExpressRoute. Return traffic from customer to Microsoft uses ExpressRoute because it's the
preferred route over the Internet.
Source -based NAT
Another way of solving asymmetric routing issues is by using SNAT. For example, you have not advertised the
public IP address of an on-premises Simple Mail Transfer Protocol (SMTP) server over ExpressRoute because you
intend to use the Internet for this type of communication. A request that originates with Microsoft and then goes to
your on-premises SMTP server traverses the Internet. You SNAT the incoming request to an internal IP address.
Reverse traffic from the SMTP server goes to the edge firewall (which you use for NAT) instead of through
ExpressRoute. The return traffic goes back via the Internet.
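The SNAT workaround can be sketched as a pair of translations at the edge firewall. The addresses below are illustrative (documentation-style ranges), not a real configuration; the point is that the server only ever sees the NAT device's address, so its replies cannot take the ExpressRoute path.

```python
# Sketch of source NAT (SNAT) at the edge firewall, as described above.
# All IP addresses are invented for illustration.

NAT_INTERNAL_IP = "10.1.0.250"   # internal address owned by the edge firewall
translations = {}                 # (reply src, reply dst) -> original client

def snat_inbound(client_ip, server_ip):
    """Rewrite the source of an inbound Internet request to the NAT address."""
    translations[(server_ip, NAT_INTERNAL_IP)] = client_ip
    return (NAT_INTERNAL_IP, server_ip)   # the packet the SMTP server actually sees

def unsnat_reply(src_ip, dst_ip):
    """Restore the real client address on the server's reply."""
    client = translations[(src_ip, dst_ip)]
    return (src_ip, client)

# Request from Microsoft (198.51.100.7) to the on-premises SMTP server:
assert snat_inbound("198.51.100.7", "10.1.0.25") == ("10.1.0.250", "10.1.0.25")

# The server replies to 10.1.0.250, i.e. back through the edge firewall, which
# restores the original destination and forwards the reply out via the Internet:
assert unsnat_reply("10.1.0.25", "10.1.0.250") == ("10.1.0.25", "198.51.100.7")
```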
Asymmetric routing detection
Traceroute is the best way to verify that your network traffic is traversing the expected path. If you expect
traffic from your on-premises SMTP server to Microsoft to take the Internet path, run a traceroute from the
SMTP server to Office 365. The result validates that traffic is indeed leaving your network toward the Internet
and not toward ExpressRoute.
Microsoft cloud services and network security
5/21/2018 • 37 minutes to read • Edit Online

Microsoft cloud services deliver hyper-scale services and infrastructure, enterprise-grade capabilities, and many
choices for hybrid connectivity. Customers can choose to access these services either via the Internet or with Azure
ExpressRoute, which provides private network connectivity. The Microsoft Azure platform allows customers to
seamlessly extend their infrastructure into the cloud and build multi-tier architectures. Additionally, third parties
can enable enhanced capabilities by offering security services and virtual appliances. This white paper provides an
overview of security and architectural issues that customers should consider when using Microsoft cloud services
accessed via ExpressRoute. It also covers creating more secure services in Azure virtual networks.

Fast start
The following logic chart can direct you to a specific example of the many security techniques available with the
Azure platform. For quick reference, find the example that best fits your case. For expanded explanations, continue
reading through the paper.

Example 1: Build a perimeter network (also known as DMZ, demilitarized zone, or screened subnet) to help protect
applications with network security groups (NSGs).
Example 2: Build a perimeter network to help protect applications with a firewall and NSGs.
Example 3: Build a perimeter network to help protect networks with a firewall, user-defined route (UDR), and NSG.
Example 4: Add a hybrid connection with a site-to-site, virtual appliance virtual private network (VPN).
Example 5: Add a hybrid connection with a site-to-site, Azure VPN gateway.
Example 6: Add a hybrid connection with ExpressRoute.
Examples for adding connections between virtual networks, high availability, and service chaining will be added to
this document over the next few months.

Microsoft compliance and infrastructure protection


To help organizations comply with national, regional, and industry-specific requirements governing the collection
and use of individuals' data, Microsoft offers over 40 certifications and attestations, the most comprehensive set
of any cloud service provider.
For more information, see the compliance information on the Microsoft Trust Center.
Microsoft has a comprehensive approach to protect cloud infrastructure needed to run hyper-scale global services.
Microsoft cloud infrastructure includes hardware, software, networks, and administrative and operations staff, in
addition to the physical data centers.

This approach provides a more secure foundation for customers to deploy their services in the Microsoft cloud.
The next step is for customers to design and create a security architecture to protect these services.

Traditional security architectures and perimeter networks


Although Microsoft invests heavily in protecting the cloud infrastructure, customers must also protect their cloud
services and resource groups. A multilayered approach to security provides the best defense. A perimeter network
security zone protects internal network resources from an untrusted network. A perimeter network refers to the
edges or parts of the network that sit between the Internet and the protected enterprise IT infrastructure.
In typical enterprise networks, the core infrastructure is heavily fortified at the perimeters, with multiple layers of
security devices. The boundary of each layer consists of devices and policy enforcement points. Each layer can
include a combination of the following network security devices: firewalls, Denial of Service (DoS) prevention,
Intrusion Detection/Prevention Systems (IDS/IPS), and VPN devices. Policy enforcement can take the form of
firewall policies, access control lists (ACLs), or specific routing. The first line of defense in the network, directly
accepting incoming traffic from the Internet, is a combination of these mechanisms to block attacks and harmful
traffic while allowing legitimate requests further into the network. This traffic routes directly to resources in the
perimeter network. That resource may then “talk” to resources deeper in the network, transiting the next boundary
for validation first. The outermost layer is called the perimeter network because this part of the network is exposed
to the Internet, usually with some form of protection on both sides. The following figure shows an example of a
single subnet perimeter network in a corporate network, with two security boundaries.
There are many architectures used to implement a perimeter network. These architectures can range from a simple
load balancer to a multiple-subnet perimeter network with varied mechanisms at each boundary to block traffic
and protect the deeper layers of the corporate network. How the perimeter network is built depends on the specific
needs of the organization and its overall risk tolerance.
As customers move their workloads to public clouds, it is critical to support similar capabilities for perimeter
network architecture in Azure to meet compliance and security requirements. This document provides guidelines
on how customers can build a secure network environment in Azure. It focuses on the perimeter network, but also
includes a comprehensive discussion of many aspects of network security. The following questions inform this
discussion:
How can a perimeter network in Azure be built?
What are some of the Azure features available to build the perimeter network?
How can back-end workloads be protected?
How are Internet communications controlled to the workloads in Azure?
How can the on-premises networks be protected from deployments in Azure?
When should native Azure security features be used versus third-party appliances or services?
The following diagram shows various layers of security Azure provides to customers. These layers are both native
in the Azure platform itself and customer-defined features:

Inbound from the Internet, Azure DDoS helps protect against large-scale attacks against Azure. The next layer is
customer-defined public IP addresses (endpoints), which are used to determine which traffic can pass through the
cloud service to the virtual network. Native Azure virtual network isolation ensures complete isolation from all
other networks and that traffic only flows through user configured paths and methods. These paths and methods
are the next layer, where NSGs, UDR, and network virtual appliances can be used to create security boundaries to
protect the application deployments in the protected network.
The next section provides an overview of Azure virtual networks. These virtual networks are created by customers,
and are what their deployed workloads are connected to. Virtual networks are the basis of all the network security
features required to establish a perimeter network to protect customer deployments in Azure.

Overview of Azure virtual networks


Before Internet traffic can get to the Azure virtual networks, there are two layers of security inherent to the Azure
platform:
1. DDoS protection: DDoS protection is a layer of the Azure physical network that protects the Azure
platform itself from large-scale Internet-based attacks. These attacks use multiple "bot" nodes in an attempt
to overwhelm an Internet service. Azure has a robust DDoS protection mesh on all inbound, outbound, and
cross-Azure-region connectivity. This DDoS protection layer has no user-configurable attributes and is not
accessible to the customer. It protects Azure as a platform from large-scale attacks, and it also monitors
outbound traffic and cross-Azure-region traffic. Using network virtual appliances on the VNet, customers
can configure additional layers of resilience against smaller-scale attacks that don't trip the platform-level
protection. As an example of DDoS protection in action: if an Internet-facing IP address were hit by a
large-scale DDoS attack, Azure would detect the sources of the attack and scrub the offending traffic before
it reached its intended destination. In almost all cases, the attacked endpoint isn't affected by the attack. In
the rare case that an endpoint is affected, traffic to other endpoints is unaffected; only the attacked
endpoint sees an impact, so other customers and services would see no effect from that attack. It's critical
to note that Azure DDoS protection only looks for large-scale attacks. Your specific service could be
overwhelmed before the platform-level protection thresholds are exceeded. For example, a web site on a
single A0 IIS server could be taken offline by a DDoS attack before Azure platform-level DDoS protection
registered a threat.
2. Public IP Addresses: Public IP addresses (enabled via service endpoints, Public IP addresses, Application
Gateway, and other Azure features that present a public IP address to the internet routed to your resource)
allow cloud services or resource groups to have public Internet IP addresses and ports exposed. The
endpoint uses Network Address Translation (NAT) to route traffic to the internal address and port on the
Azure virtual network. This path is the primary way for external traffic to pass into the virtual network. The
Public IP addresses are configurable to determine which traffic is passed in, and how and where it's
translated on to the virtual network.
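The endpoint NAT behavior in item 2 amounts to a mapping from public IP/port pairs to internal targets, with everything unmapped never reaching the virtual network. The sketch below is illustrative only; all addresses and port mappings are invented.

```python
# Sketch of public-endpoint NAT: a public IP and port are translated to a
# private address and port inside the virtual network. Mappings are invented.

ENDPOINTS = {
    ("52.170.0.10", 443): ("10.0.1.4", 443),   # web tier VM, HTTPS
    ("52.170.0.10", 8080): ("10.0.1.5", 80),   # second VM, port-translated
}

def dnat(public_ip, public_port):
    """Translate a public endpoint to its internal target, or drop the traffic."""
    return ENDPOINTS.get((public_ip, public_port))  # None = no endpoint exposed

assert dnat("52.170.0.10", 443) == ("10.0.1.4", 443)
assert dnat("52.170.0.10", 8080) == ("10.0.1.5", 80)
assert dnat("52.170.0.10", 3389) is None   # RDP not exposed; never reaches the VNet
```

Because only the configured mappings admit traffic, the set of public endpoints is itself a security control: everything not explicitly exposed is unreachable from the Internet.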
Once traffic reaches the virtual network, there are many features that come into play. Azure virtual networks are
the foundation for customers to attach their workloads and where basic network-level security applies. It is a
private network (a virtual network overlay) in Azure for customers with the following features and characteristics:
Traffic isolation: A virtual network is the traffic isolation boundary on the Azure platform. Virtual machines
(VMs) in one virtual network cannot communicate directly to VMs in a different virtual network, even if both
virtual networks are created by the same customer. Isolation is a critical property that ensures customer VMs
and communication remains private within a virtual network.

NOTE
Traffic isolation refers only to traffic inbound to the virtual network. By default, outbound traffic from the VNet to the Internet
is allowed, but it can be blocked if desired by using NSGs.
Multi-tier topology: Virtual networks allow customers to define multi-tier topology by allocating subnets and
designating separate address spaces for different elements or “tiers” of their workloads. These logical groupings
and topologies enable customers to define different access policy based on the workload types, and also control
traffic flows between the tiers.
Cross-premises connectivity: Customers can establish cross-premises connectivity between a virtual network
and multiple on-premises sites or other virtual networks in Azure. To construct a connection, customers can use
VNet Peering, Azure VPN Gateways, third-party network virtual appliances, or ExpressRoute. Azure supports
site-to-site (S2S) VPNs using standard IPsec/IKE protocols and ExpressRoute private connectivity.
NSG allows customers to create rules (ACLs) at the desired level of granularity: network interfaces, individual
VMs, or virtual subnets. Customers can control access by permitting or denying communication between the
workloads within a virtual network, from systems on customer’s networks via cross-premises connectivity, or
direct Internet communication.
UDR and IP Forwarding allow customers to define the communication paths between different tiers within a
virtual network. Customers can deploy a firewall, IDS/IPS, and other virtual appliances, and route network
traffic through these security appliances for security boundary policy enforcement, auditing, and inspection.
Network virtual appliances in the Azure Marketplace: Security appliances such as firewalls, load balancers,
and IDS/IPS are available in the Azure Marketplace and the VM Image Gallery. Customers can deploy these
appliances into their virtual networks, and specifically, at their security boundaries (including the perimeter
network subnets) to complete a multi-tiered secure network environment.
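The multi-tier topology described above amounts to carving one virtual network address space into non-overlapping per-tier subnets, which can be sketched with Python's standard-library ipaddress module. The address plan and tier names below are invented for illustration.

```python
# Sketch of a multi-tier address plan: one VNet address space split into
# per-tier /24 subnets. Tier names and ranges are illustrative.
import ipaddress

vnet = ipaddress.ip_network("10.0.0.0/16")
subnet_iter = iter(vnet.subnets(new_prefix=24))
tiers = {tier: next(subnet_iter)
         for tier in ("Perimeter", "FrontEnd", "BackEnd", "Management")}

assert str(tiers["Perimeter"]) == "10.0.0.0/24"
assert str(tiers["FrontEnd"]) == "10.0.1.0/24"
# Subnets never overlap, so per-tier NSGs and UDRs apply unambiguously:
assert not tiers["FrontEnd"].overlaps(tiers["BackEnd"])
```

Keeping each tier in its own subnet is what makes the later perimeter-network guidance workable: NSGs and user-defined routes attach at subnet granularity, so traffic between tiers can be forced through a firewall or IDS/IPS appliance.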
With these features and capabilities, one example of how a perimeter network architecture could be constructed in
Azure is the following diagram:

Perimeter network characteristics and requirements


The perimeter network is the front end of the network, directly interfacing communication from the Internet. The
incoming packets should flow through the security appliances, such as the firewall, IDS, and IPS, before reaching
the back-end servers. Internet-bound packets from the workloads can also flow through the security appliances in
the perimeter network for policy enforcement, inspection, and auditing purposes, before leaving the network.
Additionally, the perimeter network can host cross-premises VPN gateways between customer virtual networks
and on-premises networks.
Perimeter network characteristics
Referencing the previous figure, some of the characteristics of a good perimeter network are as follows:
Internet-facing:
The perimeter network subnet itself is Internet-facing, directly communicating with the Internet.
Public IP addresses, VIPs, and/or service endpoints pass Internet traffic to the front-end network and
devices.
Inbound traffic from the Internet passes through security devices before other resources on the front-end
network.
If outbound security is enabled, traffic passes through security devices, as the final step, before passing to
the Internet.
Protected network:
There is no direct path from the Internet to the core infrastructure.
Channels to the core infrastructure must traverse through security devices such as NSGs, firewalls, or
VPN devices.
Other devices must not bridge Internet and the core infrastructure.
Security devices on both the Internet-facing and the protected network facing boundaries of the
perimeter network (for example, the two firewall icons shown in the previous figure) may actually be a
single virtual appliance with differentiated rules or interfaces for each boundary. For example, one
physical device, logically separated, handling the load for both boundaries of the perimeter network.
Other common practices and constraints:
Workloads must not store business critical information.
Access and updates to perimeter network configurations and deployments are limited to only authorized
administrators.
Perimeter network requirements
To enable these characteristics, follow these guidelines on virtual network requirements to implement a successful
perimeter network:
Subnet architecture: Specify the virtual network such that an entire subnet is dedicated as the perimeter
network, separated from other subnets in the same virtual network. This separation ensures the traffic between
the perimeter network and other internal or private subnet tiers flows through a firewall or IDS/IPS virtual
appliance. User-defined routes on the boundary subnets are required to forward this traffic to the virtual
appliance.
NSG: The perimeter network subnet itself should be open to allow communication with the Internet, but that
does not mean customers should be bypassing NSGs. Follow common security practices to minimize the
network surfaces exposed to the Internet. Lock down the remote address ranges allowed to access the
deployments or the specific application protocols and ports that are open. There may be circumstances,
however, in which a complete lock-down is not possible. For example, if customers have an external website in
Azure, the perimeter network should allow the incoming web requests from any public IP addresses, but should
only open the web application ports: TCP on port 80 and/or TCP on port 443.
Routing table: The perimeter network subnet itself should be able to communicate to the Internet directly, but
should not allow direct communication to and from the back end or on-premises networks without going
through a firewall or security appliance.
Security appliance configuration: To route and inspect packets between the perimeter network and the rest
of the protected networks, the security appliances such as firewall, IDS, and IPS devices may be multi-homed.
They may have separate NICs for the perimeter network and the back-end subnets. The NICs in the perimeter
network communicate directly to and from the Internet, with the corresponding NSGs and the perimeter
network routing table. The NICs connecting to the back-end subnets have more restricted NSGs and routing
tables of the corresponding back-end subnets.
Security appliance functionality: The security appliances deployed in the perimeter network typically
perform the following functionality:
Firewall: Enforcing firewall rules or access control policies for the incoming requests.
Threat detection and prevention: Detecting and mitigating malicious attacks from the Internet.
Auditing and logging: Maintaining detailed logs for auditing and analysis.
Reverse proxy: Redirecting the incoming requests to the corresponding back-end servers. This
redirection involves mapping and translating the destination addresses on the front-end devices, typically
firewalls, to the back-end server addresses.
Forward proxy: Providing NAT and performing auditing for communication initiated from within the
virtual network to the Internet.
Router: Forwarding incoming and cross-subnet traffic inside the virtual network.
VPN device: Acting as the cross-premises VPN gateways for cross-premises VPN connectivity between
customer on-premises networks and Azure virtual networks.
VPN server: Accepting VPN clients connecting to Azure virtual networks.
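The NSG guidance above (open only the needed web ports, deny everything else) relies on priority-ordered, first-match rule evaluation. The sketch below models that evaluation; the rule set and priority numbers are invented for illustration, not an actual NSG configuration.

```python
# Sketch of NSG-style rule evaluation: rules are checked in priority order
# (lowest number first) and the first match wins. Rules are illustrative.

RULES = [  # (priority, name, dest_port, action)
    (100, "Allow_HTTP", 80, "Allow"),
    (110, "Allow_HTTPS", 443, "Allow"),
    (4096, "Deny_All", "*", "Deny"),   # catch-all with the lowest precedence
]

def evaluate(dest_port):
    """Return the (name, action) of the first rule matching dest_port."""
    for _priority, name, port, action in sorted(RULES):
        if port == "*" or port == dest_port:
            return name, action
    return None

assert evaluate(443) == ("Allow_HTTPS", "Allow")
assert evaluate(3389) == ("Deny_All", "Deny")   # RDP from the Internet is blocked
```

The catch-all deny at the bottom is what implements the "lock down everything not explicitly opened" posture recommended for the perimeter subnet.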

TIP
Keep the following two groups separate: the individuals authorized to access the perimeter network security gear and the
individuals authorized as application development, deployment, or operations administrators. Keeping these groups separate
allows for a segregation of duties and prevents a single person from bypassing both applications security and network
security controls.

Questions to be asked when building network boundaries


In this section, unless specifically mentioned, the term "networks" refers to private Azure virtual networks created
by a subscription administrator. The term doesn't refer to the underlying physical networks within Azure.
Also, Azure virtual networks are often used to extend traditional on-premises networks. It is possible to incorporate
either site-to-site or ExpressRoute hybrid networking solutions with perimeter network architectures. This hybrid
link is an important consideration in building network security boundaries.
The following three questions are critical to answer when you're building a network with a perimeter network and
multiple security boundaries.
1) How many boundaries are needed?
The first decision point is to decide how many security boundaries are needed in a given scenario:
A single boundary: One on the front-end perimeter network, between the virtual network and the Internet.
Two boundaries: One on the Internet side of the perimeter network, and another between the perimeter
network subnet and the back-end subnets in the Azure virtual networks.
Three boundaries: One on the Internet side of the perimeter network, one between the perimeter network and
back-end subnets, and one between the back-end subnets and the on-premises network.
N boundaries: A variable number. Depending on security requirements, there is no limit to the number of
security boundaries that can be applied in a given network.
The number and type of boundaries needed vary based on a company’s risk tolerance and the specific scenario
being implemented. This decision is often made together with multiple groups within an organization, often
including a risk and compliance team, a network and platform team, and an application development team. People
with knowledge of security, the data involved, and the technologies being used should have a say in this decision to
ensure the appropriate security stance for each implementation.

TIP
Use the smallest number of boundaries that satisfy the security requirements for a given situation. With more boundaries,
operations and troubleshooting can be more difficult, as well as the management overhead involved with managing the
multiple boundary policies over time. However, insufficient boundaries increase risk. Finding the balance is critical.
The preceding figure shows a high-level view of a three security boundary network. The boundaries are between
the perimeter network and the Internet, the Azure front-end and back-end private subnets, and the Azure back-end
subnet and the on-premises corporate network.
2) Where are the boundaries located?
Once the number of boundaries are decided, where to implement them is the next decision point. There are
generally three choices:
Using an Internet-based intermediary service (for example, a cloud-based Web application firewall, which is not
discussed in this document)
Using native features and/or network virtual appliances in Azure
Using physical devices on the on-premises network
On purely Azure networks, the options are native Azure features (for example, Azure Load Balancers) or network
virtual appliances from the rich partner ecosystem of Azure (for example, Check Point firewalls).
If a boundary is needed between Azure and an on-premises network, the security devices can reside on either side
of the connection (or both sides). Thus, a decision must be made about where to place the security gear.
In the previous figure, the Internet-to-perimeter network and the front-to-back-end boundaries are entirely
contained within Azure, and must be either native Azure features or network virtual appliances. Security devices on
the boundary between Azure (back-end subnet) and the corporate network could be either on the Azure side or the
on-premises side, or even a combination of devices on both sides. There can be significant advantages and
disadvantages to either option that must be seriously considered.
For example, using existing physical security gear on the on-premises network side has the advantage that no new
gear is needed. It just needs reconfiguration. The disadvantage, however, is that all traffic must come back from
Azure to the on-premises network to be seen by the security gear. Thus Azure-to-Azure traffic could incur
significant latency, and affect application performance and user experience, if it was forced back to the on-premises
network for security policy enforcement.
3) How are the boundaries implemented?
Each security boundary will likely have different capability requirements (for example, IDS and firewall rules on the
Internet side of the perimeter network, but only ACLs between the perimeter network and back-end subnet).
Deciding on which device (or how many devices) to use depends on the scenario and security requirements. In the
following section, examples 1, 2, and 3 discuss some options that could be used. Reviewing the Azure native
network features and the devices available in Azure from the partner ecosystem shows the myriad options
available to solve virtually any scenario.
Another key implementation decision point is how to connect the on-premises network with Azure. Should you
use the Azure virtual gateway or a network virtual appliance? These options are discussed in greater detail in the
following section (examples 4, 5, and 6).
Additionally, traffic between virtual networks within Azure may be needed. These scenarios will be added in the
future.
Once you know the answers to the previous questions, the Fast Start section can help identify which examples are
most appropriate for a given scenario.

Examples: Building security boundaries with Azure virtual networks


Example 1 Build a perimeter network to help protect applications with NSGs
Back to Fast start | Detailed build instructions for this example

Environment description
In this example, there is a subscription that contains the following resources:
A single resource group
A virtual network with two subnets: “FrontEnd” and “BackEnd”
A Network Security Group that is applied to both subnets
A Windows server that represents an application web server (“IIS01”)
Two Windows servers that represent application back-end servers (“AppVM01”, “AppVM02”)
A Windows server that represents a DNS server (“DNS01”)
A public IP associated with the application web server
For scripts and an Azure Resource Manager template, see the detailed build instructions.
NSG description
In this example, an NSG is built and then loaded with six rules.

TIP
Generally speaking, you should create your specific “Allow” rules first, followed by the more generic “Deny” rules. Rules are evaluated in priority order (lower numbers first); once traffic matches a rule, no further rules are evaluated.
NSG rules can apply in either the inbound or outbound direction (from the perspective of the subnet).

Declaratively, the following rules are being built for inbound traffic:
1. Internal DNS traffic (port 53) is allowed.
2. RDP traffic (port 3389) from the Internet to any Virtual Machine is allowed.
3. HTTP traffic (port 80) from the Internet to web server (IIS01) is allowed.
4. Any traffic (all ports) from IIS01 to AppVM01 is allowed.
5. Any traffic (all ports) from the Internet to the entire virtual network (both subnets) is denied.
6. Any traffic (all ports) from the front-end subnet to the back-end subnet is denied.
With these rules bound to each subnet, if an HTTP request was inbound from the Internet to the web server, both
rules 3 (allow) and 5 (deny) would apply. But because rule 3 has a higher priority (a lower number), only it would apply, and rule 5
would not come into play. Thus the HTTP request would be allowed to the web server. If that same traffic was
trying to reach the DNS01 server, rule 5 (deny) would be the first to apply, and the traffic would not be allowed to
pass to the server. Rule 6 (deny) blocks the front-end subnet from talking to the back-end subnet (except for
allowed traffic in rules 1 and 4). This rule-set protects the back-end network in case an attacker compromises the
web application on the front end. The attacker would have limited access to the back-end “protected” network (only
to resources exposed on the AppVM01 server).
There is a default outbound rule that allows traffic out to the Internet. For this example, we’re allowing outbound
traffic and not modifying any outbound rules. To lock down traffic in both directions, user-defined routing is
required (see example 3).
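The first-match evaluation described above can be modeled in a few lines of Python. This is an illustrative sketch only, not how NSGs are implemented; the rule set mirrors the six rules listed earlier, and the internal server addresses (IIS01 at 10.0.1.5, AppVM01 at 10.0.2.5) are assumed for the example.

```python
import ipaddress

# Simplified NSG model: rules are evaluated in ascending priority order,
# and the first rule whose filters match the traffic decides the outcome.
RULES = [
    # (priority, name, source, destination, port, action)
    (100, "AllowDNS",      "any",         "any",         53,    "Allow"),
    (110, "AllowRDP",      "Internet",    "any",         3389,  "Allow"),
    (120, "AllowWebToIIS", "Internet",    "10.0.1.5",    80,    "Allow"),
    (130, "AllowIISToApp", "10.0.1.5",    "10.0.2.5",    "any", "Allow"),
    (140, "DenyInternet",  "Internet",    "any",         "any", "Deny"),
    (150, "DenyFrontBack", "10.0.1.0/24", "10.0.2.0/24", "any", "Deny"),
]

def _matches(field, value):
    """A rule field matches on 'any', exact equality, or CIDR containment."""
    if field == "any" or field == value:
        return True
    if "/" in str(field):  # subnet-scoped rules use CIDR notation
        try:
            return ipaddress.ip_address(value) in ipaddress.ip_network(field)
        except ValueError:
            return False
    return False

def evaluate(src, dst, port):
    """Return the action and name of the first matching rule, as an NSG does."""
    for _prio, name, r_src, r_dst, r_port, action in sorted(RULES):
        if _matches(r_src, src) and _matches(r_dst, dst) and _matches(r_port, port):
            return action, name
    return "Deny", "DefaultDeny"  # NSGs end with implicit default rules
```

For instance, an HTTP request from the Internet to the web server matches rule 120 before the broad deny ever comes into play, while the same request aimed at a back-end address falls through to rule 140.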
Conclusion
This example is a relatively simple and straightforward way of isolating the back-end subnet from inbound traffic.
For more information, see the detailed build instructions. These instructions include:
How to build this perimeter network with classic PowerShell scripts.
How to build this perimeter network with an Azure Resource Manager template.
Detailed descriptions of each NSG command.
Detailed traffic flow scenarios, showing how traffic is allowed or denied in each layer.
Example 2 Build a perimeter network to help protect applications with a firewall and NSGs
Back to Fast start | Detailed build instructions for this example
Environment description
In this example, there is a subscription that contains the following resources:
A single resource group
A virtual network with two subnets: “FrontEnd” and “BackEnd”
A Network Security Group that is applied to both subnets
A network virtual appliance, in this case a firewall, connected to the front-end subnet
A Windows server that represents an application web server (“IIS01”)
Two Windows servers that represent application back-end servers (“AppVM01”, “AppVM02”)
A Windows server that represents a DNS server (“DNS01”)
For scripts and an Azure Resource Manager template, see the detailed build instructions.
NSG description
In this example, an NSG is built and then loaded with six rules.

TIP
Generally speaking, you should create your specific “Allow” rules first, followed by the more generic “Deny” rules. Rules are evaluated in priority order (lower numbers first); once traffic matches a rule, no further rules are evaluated.
NSG rules can apply in either the inbound or outbound direction (from the perspective of the subnet).

Declaratively, the following rules are being built for inbound traffic:
1. Internal DNS traffic (port 53) is allowed.
2. RDP traffic (port 3389) from the Internet to any Virtual Machine is allowed.
3. Any Internet traffic (all ports) to the network virtual appliance (firewall) is allowed.
4. Any traffic (all ports) from IIS01 to AppVM01 is allowed.
5. Any traffic (all ports) from the Internet to the entire virtual network (both subnets) is denied.
6. Any traffic (all ports) from the front-end subnet to the back-end subnet is denied.
With these rules bound to each subnet, if an HTTP request was inbound from the Internet to the firewall, both rules
3 (allow) and 5 (deny) would apply. But because rule 3 has a higher priority (a lower number), only it would apply, and rule 5 would
not come into play. Thus the HTTP request would be allowed to the firewall. If that same traffic was trying to reach
the IIS01 server, even though it’s on the front-end subnet, rule 5 (deny) would apply, and the traffic would not be
allowed to pass to the server. Rule 6 (deny) blocks the front-end subnet from talking to the back-end subnet (except
for allowed traffic in rules 1 and 4). This rule-set protects the back-end network in case an attacker compromises
the web application on the front end. The attacker would have limited access to the back-end “protected” network
(only to resources exposed on the AppVM01 server).
There is a default outbound rule that allows traffic out to the Internet. For this example, we’re allowing outbound
traffic and not modifying any outbound rules. To lock down traffic in both directions, user-defined routing is
required (see example 3).
Firewall rule description
On the firewall, forwarding rules should be created. Since this example only routes Internet traffic in-bound to the
firewall and then to the web server, only one forwarding network address translation (NAT) rule is needed.
The forwarding rule accepts inbound traffic from any source address that hits the firewall on HTTP (port 80, or 443 for HTTPS). The traffic is sent out of the firewall’s local interface and redirected, via NAT, to the web server at the IP address 10.0.1.5.
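A destination-NAT rule of this kind can be modeled in a few lines. The sketch below is illustrative only and is not configuration syntax for any particular firewall product; only the web server's internal address (10.0.1.5) comes from the example above.

```python
WEB_SERVER_IP = "10.0.1.5"   # internal address of IIS01, from the example
NAT_PORTS = {80, 443}        # HTTP and HTTPS

def apply_dnat(packet):
    """Rewrite the destination of inbound web traffic to the internal server.

    `packet` is a dict with 'src', 'dst', and 'dport' keys. The rule accepts
    any source address and redirects only the NAT'd web ports; everything
    else is left for other rules (or the final deny) to handle.
    """
    if packet["dport"] in NAT_PORTS:
        return dict(packet, dst=WEB_SERVER_IP)
    return None  # no forwarding rule matched
```

A request to port 80 on the firewall's public address is rewritten toward 10.0.1.5, while a request to port 3389 does not match this rule at all.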
Conclusion
This example is a relatively straightforward way of protecting your application with a firewall and isolating the
back-end subnet from inbound traffic. For more information, see the detailed build instructions. These instructions
include:
How to build this perimeter network with classic PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed descriptions of each NSG command and firewall rule.
Detailed traffic flow scenarios, showing how traffic is allowed or denied in each layer.
Example 3 Build a perimeter network to help protect networks with a firewall and UDR and NSG
Back to Fast start | Detailed build instructions for this example
Environment description
In this example, there is a subscription that contains the following resources:
A single resource group
A virtual network with three subnets: “SecNet”, “FrontEnd”, and “BackEnd”
A network virtual appliance, in this case a firewall, connected to the SecNet subnet
A Windows server that represents an application web server (“IIS01”)
Two Windows servers that represent application back-end servers (“AppVM01”, “AppVM02”)
A Windows server that represents a DNS server (“DNS01”)
For scripts and an Azure Resource Manager template, see the detailed build instructions.
UDR description
By default, the following system routes are defined as:

Effective routes:
Address Prefix     Next hop type  Next hop IP address  Status  Source
--------------     -------------  -------------------  ------  ------
{10.0.0.0/16}      VNETLocal                           Active  Default
{0.0.0.0/0}        Internet                            Active  Default
{10.0.0.0/8}       Null                                Active  Default
{100.64.0.0/10}    Null                                Active  Default
{172.16.0.0/12}    Null                                Active  Default
{192.168.0.0/16}   Null                                Active  Default

VNETLocal is always the set of address prefixes that define that specific virtual network (that is, it changes from virtual network to virtual network, depending on how each virtual network is defined). The remaining system routes are static defaults, as indicated in the table.
In this example, two routing tables are created, one each for the front-end and back-end subnets. Each table is loaded with static routes appropriate for the given subnet. In this example, each table has three routes that, together, force all non-local traffic through the firewall (Next hop = virtual appliance IP address):
1. Local subnet traffic with no Next Hop defined to allow local subnet traffic to bypass the firewall.
2. Virtual network traffic with a Next Hop defined as firewall. This next hop overrides the default rule that allows
local virtual network traffic to route directly.
3. All remaining traffic (0/0) with a Next Hop defined as the firewall.

TIP
Not having the local subnet entry in the UDR breaks local subnet communications.
In our example, 10.0.1.0/24 pointing to VNETLocal is critical! Without it, packets leaving the web server (10.0.1.4) destined for another local server (for example, 10.0.1.25) will fail, because they are sent to the NVA. The NVA sends them back to the subnet, and the subnet resends them to the NVA in an infinite loop.
The chances of a routing loop are typically higher on appliances with multiple NICs connected to separate subnets, which is often the case with traditional on-premises appliances.
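The loop described in this tip follows from longest-prefix-match route selection. The following sketch is illustrative only; the prefixes are taken from the example's front-end route table, and it shows why the local-subnet entry matters.

```python
import ipaddress

def next_hop(routes, dst):
    """Pick the most specific (longest-prefix) matching route, as Azure does."""
    candidates = [(net, hop) for net, hop in routes
                  if ipaddress.ip_address(dst) in ipaddress.ip_network(net)]
    return max(candidates, key=lambda r: ipaddress.ip_network(r[0]).prefixlen)[1]

# Front-end route table from the example, with the critical local route present:
good_table = [
    ("10.0.1.0/24", "VNETLocal"),          # local subnet traffic bypasses the firewall
    ("10.0.0.0/16", "VirtualAppliance"),   # rest of the VNet goes to the NVA
    ("0.0.0.0/0",   "VirtualAppliance"),   # everything else goes to the NVA
]

# The same table without the local route: local traffic now resolves to the
# NVA, which hands it back to the subnet, producing the infinite loop.
bad_table = good_table[1:]
```

With the local route present, a packet for 10.0.1.25 stays on the subnet; without it, the 10.0.0.0/16 entry wins and the packet is sent to the appliance instead.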

Once the routing tables are created, they must be bound to their subnets. The front-end subnet routing table, once
created and bound to the subnet, would look like this output:

Effective routes:
Address Prefix   Next hop type     Next hop IP address  Status  Source
--------------   -------------     -------------------  ------  ------
{10.0.1.0/24}    VNETLocal                              Active
{10.0.0.0/16}    VirtualAppliance  10.0.0.4             Active
{0.0.0.0/0}      VirtualAppliance  10.0.0.4             Active

NOTE
A UDR can now be applied to the gateway subnet to which the ExpressRoute circuit is connected.
Examples of how to enable your perimeter network with site-to-site networking or ExpressRoute are shown in examples 4, 5, and 6.

IP Forwarding description
IP Forwarding is a companion feature to UDR. IP Forwarding is a setting on a virtual appliance that allows it to
receive traffic not specifically addressed to the appliance, and then forward that traffic to its ultimate destination.
For example, if AppVM01 makes a request to the DNS01 server, UDR would route this traffic to the firewall. With
IP Forwarding enabled, the traffic for the DNS01 destination (10.0.2.4) is accepted by the appliance (10.0.0.4) and
then forwarded to its ultimate destination (10.0.2.4). Without IP Forwarding enabled on the firewall, traffic would
not be accepted by the appliance even though the route table has the firewall as the next hop. To use a virtual
appliance, it’s critical to remember to enable IP Forwarding along with UDR.
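The acceptance behavior reduces to a simple predicate: a NIC accepts traffic addressed to itself, and accepts traffic addressed elsewhere only when IP Forwarding is enabled. The sketch below is an illustrative model, using the appliance (10.0.0.4) and DNS server (10.0.2.4) addresses from the example.

```python
def nic_accepts(packet_dst, nic_ip, ip_forwarding_enabled):
    """An Azure NIC accepts traffic addressed to itself; traffic addressed
    to any other destination is accepted only when IP Forwarding is on."""
    return packet_dst == nic_ip or ip_forwarding_enabled
```

So a packet for DNS01 (10.0.2.4) routed to the appliance (10.0.0.4) is accepted only when forwarding is enabled, which is why the setting must accompany the UDR.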
NSG description
In this example, an NSG is built and then loaded with a single rule. The NSG is then bound only to the front-end and back-end subnets (not the SecNet). Declaratively, the following rule is being built:
Any traffic (all ports) from the Internet to the entire virtual network (all subnets) is denied.
Although an NSG is used in this example, its main purpose is to serve as a secondary layer of defense against manual misconfiguration. The goal is to block all inbound traffic from the Internet to either the front-end or back-end
subnets. Traffic should only flow through the SecNet subnet to the firewall (and then, if appropriate, on to the front-
end or back-end subnets). Plus, with the UDR rules in place, any traffic that did make it into the front-end or back-end subnets would be directed out to the firewall. The firewall would see this traffic as an
asymmetric flow and would drop the outbound traffic. Thus there are three layers of security protecting the
subnets:
No Public IP addresses on any FrontEnd or BackEnd NICs.
NSGs denying traffic from the Internet.
The firewall dropping asymmetric traffic.
One interesting point regarding the NSG in this example is that it contains only one rule, which is to deny Internet
traffic to the entire virtual network, including the Security subnet. However, since the NSG is only bound to the
front-end and back-end subnets, the rule isn’t processed on traffic inbound to the Security subnet. As a result,
traffic flows to the Security subnet.
Firewall rules
On the firewall, forwarding rules should be created. Since the firewall is blocking or forwarding all inbound,
outbound, and intra-virtual network traffic, many firewall rules are needed. Also, all inbound traffic hits the Security
Service public IP address (on different ports), to be processed by the firewall. A best practice is to diagram the
logical flows before setting up the subnets and firewall rules, to avoid rework later. The following figure is a logical
view of the firewall rules for this example:

NOTE
Based on the Network Virtual Appliance used, the management ports vary. In this example, a Barracuda NextGen Firewall is
referenced, which uses ports 22, 801, and 807. Consult the appliance vendor documentation to find the exact ports used for
management of the device being used.

Firewall rules description


In the preceding logical diagram, the security subnet is not shown because the firewall is the only resource on that
subnet. The diagram is showing the firewall rules and how they logically allow or deny traffic flows, not the actual
routed path. Also, the external ports selected for the RDP traffic are higher ranged ports (8014 – 8026) and were
selected to loosely align with the last two octets of the local IP address for easier readability (for example, local
server address 10.0.1.4 is associated with external port 8014). Any higher non-conflicting ports, however, could be
used.
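The port convention described above, with the external port loosely encoding the last two octets of the internal address, can be expressed as a small helper. This mapping is just the example's readability convention, not anything Azure or the firewall requires, and it only works while the last two octets are single digits.

```python
def external_rdp_port(internal_ip):
    """Map 10.0.X.Y to external port 80XY, e.g. 10.0.1.4 -> 8014.

    Assumes the example's convention: single-digit third and fourth octets.
    """
    octets = [int(o) for o in internal_ip.split(".")]
    return 8000 + octets[2] * 10 + octets[3]
```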
For this example, we need seven types of rules:
External rules (for inbound traffic):
1. Firewall management rule: This App Redirect rule allows traffic to pass to the management ports of the
network virtual appliance.
2. RDP rules (for each Windows server): These four rules (one for each server) allow management of the
individual servers via RDP. The four RDP rules could also be collapsed into one rule, depending on the
capabilities of the network virtual appliance being used.
3. Application traffic rules: There are two of these rules, the first for the front-end web traffic, and the
second for the back-end traffic (for example, web server to data tier). The configuration of these rules
depends on the network architecture (where your servers are placed) and traffic flows (which direction
the traffic flows, and which ports are used).
The first rule allows the actual application traffic to reach the application server. While the other
rules allow for security and management, application traffic rules are what allow external users or
services to access the applications. For this example, there is a single web server on port 80. Thus
a single firewall application rule redirects inbound traffic to the external IP, to the web servers
internal IP address. The redirected traffic session would be translated via NAT to the internal
server.
The second rule is the back-end rule to allow the web server to talk to the AppVM01 server (but
not AppVM02) via any port.
Internal rules (for intra-virtual network traffic)
1. Outbound to Internet rule: This rule allows traffic from any network to pass to the selected networks.
This rule is usually a default rule already on the firewall, but in a disabled state. This rule should be
enabled for this example.
2. DNS rule: This rule allows only DNS (port 53) traffic to pass to the DNS server. For this environment,
most traffic from the front end to the back end is blocked. This rule specifically allows DNS from any
local subnet.
3. Subnet to subnet rule: This rule is to allow any server on the back-end subnet to connect to any server on
the front-end subnet (but not the reverse).
Fail-safe rule (for traffic that doesn’t meet any of the previous):
1. Deny all traffic rule: This deny rule should always be the final rule (in terms of priority), and as such if a
traffic flow fails to match any of the preceding rules it is dropped by this rule. This rule is a default rule
and usually in-place and active. No modifications are usually needed to this rule.

TIP
On the second application traffic rule, to simplify this example, any port is allowed. In a real scenario, the most specific port
and address ranges should be used to reduce the attack surface of this rule.

Once the previous rules are created, it’s important to review the priority of each rule to ensure traffic is allowed or
denied as desired. For this example, the rules are in priority order.
Conclusion
This example is a more complex but complete way of protecting and isolating the network than the previous
examples. (Example 2 protects just the application, and Example 1 just isolates subnets). This design allows for
monitoring traffic in both directions, and protects not just the inbound application server but enforces network
security policy for all servers on this network. Also, depending on the appliance used, full traffic auditing and
awareness can be achieved. For more information, see the detailed build instructions. These instructions include:
How to build this example perimeter network with classic PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed descriptions of each UDR, NSG command, and firewall rule.
Detailed traffic flow scenarios, showing how traffic is allowed or denied in each layer.
Example 4 Add a hybrid connection with a site-to-site, virtual appliance VPN
Back to Fast start | Detailed build instructions available soon

Environment description
Hybrid networking using a network virtual appliance (NVA) can be added to any of the perimeter network types
described in examples 1, 2, or 3.
As shown in the previous figure, a VPN connection over the Internet (site-to-site) is used to connect an on-
premises network to an Azure virtual network via an NVA.

NOTE
If you use ExpressRoute with the Azure Public Peering option enabled, a static route should be created. This static route should route to the NVA VPN IP address over your corporate Internet connection, not via the ExpressRoute connection. The NAT required by the ExpressRoute Azure Public Peering option can break the VPN session.

Once the VPN is in place, the NVA becomes the central hub for all networks and subnets. The firewall forwarding
rules determine which traffic flows are allowed, are translated via NAT, are redirected, or are dropped (even for
traffic flows between the on-premises network and Azure).
Traffic flows should be considered carefully, as they can be optimized or degraded by this design pattern,
depending on the specific use case.
Using the environment built in example 3, and then adding a site-to-site VPN hybrid network connection, produces
the following design:
The on-premises router, or any other network device that is compatible with your NVA for VPN, would be the VPN
client. This physical device would be responsible for initiating and maintaining the VPN connection with your NVA.
Logically to the NVA, the network looks like four separate “security zones” with the rules on the NVA being the
primary director of traffic between these zones:

Conclusion
The addition of a site-to-site VPN hybrid network connection to an Azure virtual network can extend the on-
premises network into Azure in a secure manner. In using a VPN connection, your traffic is encrypted and routes
via the Internet. The NVA in this example provides a central location to enforce and manage the security policy. For
more information, see the detailed build instructions (forthcoming). These instructions include:
How to build this example perimeter network with PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed traffic flow scenarios, showing how traffic flows through this design.
Example 5 Add a hybrid connection with a site-to-site, Azure VPN gateway
Back to Fast start | Detailed build instructions available soon
Environment description
Hybrid networking using an Azure VPN gateway can be added to either perimeter network type described in
examples 1 or 2.
As shown in the preceding figure, a VPN connection over the Internet (site-to-site) is used to connect an on-
premises network to an Azure virtual network via an Azure VPN gateway.

NOTE
If you use ExpressRoute with the Azure Public Peering option enabled, a static route should be created. This static route should route to the VPN IP address over your corporate Internet connection, not via the ExpressRoute connection. The NAT required by the ExpressRoute Azure Public Peering option can break the VPN session.

The following figure shows the two network edges in this example. On the first edge, the NVA and NSGs control
traffic flows for intra-Azure networks and between Azure and the Internet. The second edge is the Azure VPN
gateway, which is a separate and isolated network edge between on-premises and Azure.
Traffic flows should be considered carefully, as they can be optimized or degraded by this design pattern,
depending on the specific use case.
Using the environment built in example 1, and then adding a site-to-site VPN hybrid network connection, produces
the following design:
Conclusion
The addition of a site-to-site VPN hybrid network connection to an Azure virtual network can extend the on-
premises network into Azure in a secure manner. Using the native Azure VPN gateway, your traffic is IPSec
encrypted and routes via the Internet. Also, using the Azure VPN gateway can provide a lower-cost option (no
additional licensing cost as with third-party NVAs). This option is most economical in example 1, where no NVA is
used. For more information, see the detailed build instructions (forthcoming). These instructions include:
How to build this example perimeter network with PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed traffic flow scenarios, showing how traffic flows through this design.
Example 6 Add a hybrid connection with ExpressRoute
Back to Fast start | Detailed build instructions available soon
Environment description
Hybrid networking using an ExpressRoute private peering connection can be added to either perimeter network
type described in examples 1 or 2.
As shown in the preceding figure, ExpressRoute private peering provides a direct connection between your on-
premises network and the Azure virtual network. Traffic transits only the service provider network and the
Microsoft Azure network, never touching the Internet.

TIP
Using ExpressRoute keeps corporate network traffic off the Internet. It also allows for service level agreements from your
ExpressRoute provider. The Azure Gateway can pass up to 10 Gbps with ExpressRoute, whereas with site-to-site VPNs, the
Azure Gateway maximum throughput is 200 Mbps.

As seen in the following diagram, with this option the environment now has two network edges. The NVA and
NSG control traffic flows for intra-Azure networks and between Azure and the Internet, while the gateway is a
separate and isolated network edge between on-premises and Azure.
Traffic flows should be considered carefully, as they can be optimized or degraded by this design pattern,
depending on the specific use case.
Using the environment built in example 1, and then adding an ExpressRoute hybrid network connection, produces
the following design:
Conclusion
The addition of an ExpressRoute Private Peering network connection can extend the on-premises network into
Azure in a secure, lower latency, higher performing manner. Also, using the native Azure Gateway, as in this
example, provides a lower-cost option (no additional licensing as with third-party NVAs). For more information,
see the detailed build instructions (forthcoming). These instructions include:
How to build this example perimeter network with PowerShell scripts.
How to build this example with an Azure Resource Manager template.
Detailed traffic flow scenarios, showing how traffic flows through this design.

References
Helpful websites and documentation
Access Azure with Azure Resource Manager:
Accessing Azure with PowerShell: https://docs.microsoft.com/powershell/azureps-cmdlets-docs/
Virtual networking documentation: https://docs.microsoft.com/azure/virtual-network/
Network security group documentation: https://docs.microsoft.com/azure/virtual-network/virtual-networks-nsg
User-defined routing documentation: https://docs.microsoft.com/azure/virtual-network/virtual-networks-udr-overview
Azure virtual gateways: https://docs.microsoft.com/azure/vpn-gateway/
Site-to-Site VPNs: https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell
ExpressRoute documentation (be sure to check out the “Getting Started” and “How To” sections):
https://docs.microsoft.com/azure/expressroute/
Azure Network Security Best Practices
5/21/2018 • 17 minutes to read • Edit Online

Microsoft Azure enables you to connect virtual machines and appliances to other networked devices by placing
them on Azure Virtual Networks. An Azure Virtual Network is a construct that allows you to connect virtual
network interface cards to a virtual network to allow TCP/IP-based communications between network-enabled
devices. Azure Virtual Machines connected to an Azure Virtual Network can connect to devices on the same Azure
Virtual Network, different Azure Virtual Networks, on the Internet or even on your own on-premises networks.
This article discusses a collection of Azure network security best practices. These best practices are derived from
our experience with Azure networking and the experiences of customers like yourself.
For each best practice, this article explains:
What the best practice is
Why you want to enable that best practice
What might be the result if you fail to enable the best practice
Possible alternatives to the best practice
How you can learn to enable the best practice
This Azure Network Security Best Practices article is based on a consensus opinion, and Azure platform capabilities
and feature sets, as they exist at the time this article was written. Opinions and technologies change over time and
this article will be updated on a regular basis to reflect those changes.
Azure Network security best practices discussed in this article include:
Logically segment subnets
Control routing behavior
Enable Forced Tunneling
Use Virtual network appliances
Deploy DMZs for security zoning
Avoid exposure to the Internet with dedicated WAN links
Optimize uptime and performance
Use global load balancing
Disable RDP Access to Azure Virtual Machines
Enable Azure Security Center
Extend your datacenter into Azure

Logically segment subnets


Azure Virtual Networks are similar to a LAN on your on-premises network. The idea behind an Azure Virtual
Network is that you create a single private IP address space-based network on which you can place all your Azure
Virtual Machines. The private IP address spaces available are in the Class A (10.0.0.0/8), Class B (172.16.0.0/12),
and Class C (192.168.0.0/16) ranges.
Similar to what you do on-premises, you should segment the larger address space into subnets. You can use CIDR
based subnetting principles to create your subnets.
Routing between subnets will happen automatically and you don't need to manually configure routing tables.
However, the default setting is that there are no network access controls between the subnets you create on the
Azure Virtual Network. In order to create network access controls between subnets, you’ll need to put something
between the subnets.
One of the things you can use to accomplish this task is a Network Security Group (NSG). NSGs are simple,
stateful packet inspection devices that use the 5-tuple (the source IP, source port, destination IP, destination port,
and layer 4 protocol) approach to create allow/deny rules for network traffic. You can allow or deny traffic to and
from single IP address, to and from multiple IP addresses or even to and from entire subnets.
Using NSGs for network access control between subnets enables you to put resources that belong to the same
security zone or role in their own subnets. For example, think of a simple 3-tier application that has a web tier, an
application logic tier and a database tier. You put virtual machines that belong to each of these tiers into their own
subnets. Then you use NSGs to control traffic between the subnets:
Web tier virtual machines can only initiate connections to the application logic machines and can only accept
connections from the Internet
Application logic virtual machines can only initiate connections with database tier and can only accept
connections from the web tier
Database tier virtual machines cannot initiate connection with anything outside of their own subnet and can
only accept connections from the application logic tier
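The tier rules above amount to a small allowed-flow matrix: each tier may only initiate connections to the next tier down. The sketch below models that matrix; it is illustrative only and is not NSG syntax, and the tier names are just labels for the example.

```python
# Which tier may initiate connections to which, per the rules above.
# Any pair not listed is implicitly denied (like an NSG's default deny).
ALLOWED_FLOWS = {
    ("internet", "web"): True,   # web tier accepts connections from the Internet
    ("web", "app"): True,        # web tier may initiate to the application tier
    ("app", "db"): True,         # application tier may initiate to the database tier
}

def may_initiate(src_tier, dst_tier):
    """True if src_tier is allowed to open a connection to dst_tier."""
    return ALLOWED_FLOWS.get((src_tier, dst_tier), False)
```

Note that the matrix has no ("internet", "db") or ("web", "db") entries, which is exactly the point of the segmentation: a compromised web tier cannot reach the data tier directly.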
To learn more about Network Security Groups and how you can use them to logically segment your Azure Virtual
Networks, see What is a Network Security Group (NSG).

Control routing behavior


When you put a virtual machine on an Azure Virtual Network, you’ll notice that the virtual machine can connect to
any other virtual machine on the same Azure Virtual Network, even if the other virtual machines are on different
subnets. This is possible because there is a collection of system routes that are enabled by default that allow this
type of communication. These default routes allow virtual machines on the same Azure Virtual Network to initiate
connections with each other, and with the Internet (for outbound communications to the Internet only).
While the default system routes are useful for many deployment scenarios, there are times when you want to
customize the routing configuration for your deployments. These customizations will allow you to configure the
next hop address to reach specific destinations.
We recommend that you configure User-Defined Routes when you deploy a virtual network security appliance,
which is discussed in a later best practice.

NOTE
User-Defined Routes are not required, and the default system routes work in most instances.

You can learn more about User-Defined Routes and how to configure them by reading the article What are User
Defined Routes and IP Forwarding.
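The next-hop selection described above can be sketched as a longest-prefix match. The following model is illustrative only (the prefixes and next-hop names are assumptions, not Azure's actual route-resolution code); it shows how a user-defined 0.0.0.0/0 route can redirect Internet-bound traffic to an appliance while intra-VNet traffic still uses the more specific system route:

```python
# A simplified model of route selection: the route with the longest matching
# prefix wins, and a user-defined route overrides a system route with the
# same prefix. Addresses and next-hop names are invented; this is not
# Azure's routing implementation.
import ipaddress

# System routes first, user-defined routes last (later entries win ties).
routes = [
    ("10.0.0.0/16", "VnetLocal"),       # system route: traffic stays in the VNet
    ("0.0.0.0/0", "Internet"),          # system default route
    ("0.0.0.0/0", "VirtualAppliance"),  # user-defined route to a security appliance
]

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(prefix), hop)
               for prefix, hop in routes if addr in ipaddress.ip_network(prefix)]
    # Longest prefix wins; on a tie, the later (user-defined) route wins.
    best = max(enumerate(matches), key=lambda m: (m[1][0].prefixlen, m[0]))
    return best[1][1]

assert next_hop("10.0.5.4") == "VnetLocal"        # intra-VNet traffic is unaffected
assert next_hop("8.8.8.8") == "VirtualAppliance"  # Internet-bound traffic is redirected
```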

Enable Forced Tunneling


To better understand forced tunneling, it’s useful to understand what “split tunneling” is. The most common
example of split tunneling is seen with VPN connections. Imagine that you establish a VPN connection from your
hotel room to your corporate network. This connection allows you to access corporate resources and all
communications to your corporate network go through the VPN tunnel.
What happens when you want to connect to resources on the Internet? When split tunneling is enabled, those
connections go directly to the Internet and not through the VPN tunnel. Some security experts consider this to be a
potential risk and therefore recommend that split tunneling be disabled, so that all connections, whether destined
for the Internet or for corporate resources, go through the VPN tunnel. The
advantage of doing this is that connections to the Internet are then forced through the corporate network security
devices, which wouldn’t be the case if the VPN client connected to the Internet outside of the VPN tunnel.
Now let’s bring this back to virtual machines on an Azure Virtual Network. The default routes for an Azure Virtual
Network allow virtual machines to initiate traffic to the Internet. This too can represent a security risk, as these
outbound connections could increase the attack surface of a virtual machine and be leveraged by attackers. For this
reason, we recommend that you enable forced tunneling on your virtual machines when you have cross-premises
connectivity between your Azure Virtual Network and your on-premises network. Cross premises connectivity is
discussed later in this Azure networking best practices document.
If you do not have a cross premises connection, make sure you take advantage of Network Security Groups
(discussed earlier) or Azure virtual network security appliances (discussed next) to prevent outbound connections
to the Internet from your Azure Virtual Machines.
To learn more about forced tunneling and how to enable it, see Configure Forced Tunneling using PowerShell and
Azure Resource Manager.

Use virtual network appliances


While Network Security Groups and User-Defined Routing can provide a certain measure of network security at
the network and transport layers of the OSI model, there are going to be situations where you'll want or need to
enable security at higher levels of the stack. In such situations, we recommend that you deploy virtual network
security appliances provided by Azure partners.
Azure network security appliances can deliver enhanced levels of security over what is provided by network level
controls. Some of the network security capabilities provided by virtual network security appliances include:
Firewalling
Intrusion detection/Intrusion Prevention
Vulnerability management
Application control
Network-based anomaly detection
Web filtering
Antivirus
Botnet protection
If you require a higher level of network security than you can obtain with network level access controls, then we
recommend that you investigate and deploy Azure virtual network security appliances.
To learn about what Azure virtual network security appliances are available, and about their capabilities, visit the
Azure Marketplace and search for “security” and “network security.”

Deploy DMZs for security zoning


A DMZ or “perimeter network” is a physical or logical network segment that is designed to provide an additional
layer of security between your assets and the Internet. The intent of the DMZ is to place specialized network access
control devices on the edge of the DMZ network so that only desired traffic is allowed past the network security
device and into your Azure Virtual Network.
DMZs are useful because you can focus your network access control management, monitoring, logging, and
reporting on the devices at the edge of your Azure Virtual Network. Here you would typically enable DDoS
prevention, Intrusion Detection/Intrusion Prevention systems (IDS/IPS), firewall rules and policies, web filtering,
network antimalware and more. The network security devices sit between the Internet and your Azure Virtual
Network and have an interface on both networks.
While this is the basic design of a DMZ, there are many different DMZ designs, such as back-to-back, tri-homed,
multi-homed, and others.
We recommend for all high security deployments that you consider deploying a DMZ to enhance the level of
network security for your Azure resources.
To learn more about DMZs and how to deploy them in Azure, see Microsoft Cloud Services and Network Security.

Avoid exposure to the Internet with dedicated WAN links


Many organizations have chosen the hybrid IT route. In hybrid IT, some of the company's information assets are in
Azure, while others remain on-premises. In many cases, some components of a service run in Azure while other
components remain on-premises.
In the hybrid IT scenario, there is usually some type of cross-premises connectivity. This cross-premises
connectivity allows the company to connect their on-premises networks to Azure Virtual Networks. There are two
cross-premises connectivity solutions available:
Site-to-site VPN
ExpressRoute
Site-to-site VPN represents a virtual private connection between your on-premises network and an Azure Virtual
Network. This connection takes place over the Internet and allows you to “tunnel” information inside an encrypted
link between your network and Azure. Site-to-site VPN is a secure, mature technology that has been deployed by
enterprises of all sizes for decades. Tunnel encryption is performed using IPsec tunnel mode.
While site-to-site VPN is a trusted, reliable, and established technology, traffic within the tunnel does traverse the
Internet. In addition, bandwidth is relatively constrained to a maximum of about 200 Mbps.
If you require an exceptional level of security or performance for your cross-premises connections, we recommend
that you use Azure ExpressRoute for your cross-premises connectivity. ExpressRoute is a dedicated WAN link
between your on-premises location (or an exchange provider's facility) and Azure. Because this is a telco
connection, your data doesn't travel over the Internet and therefore is not exposed to the potential risks inherent in
Internet communications.
To learn more about how Azure ExpressRoute works and how to deploy, see ExpressRoute Technical Overview.

Optimize uptime and performance


Confidentiality, integrity, and availability (CIA) comprise the triad of today’s most influential security model.
Confidentiality is about encryption and privacy, integrity is about making sure that data is not changed by
unauthorized personnel, and availability is about making sure that authorized individuals are able to access the
information they are authorized to access. Failure in any one of these areas represents a potential breach in
security.
Availability can be thought of as being about uptime and performance. If a service is down, information can’t be
accessed. If performance is so poor as to make the data unusable, then we can consider the data to be inaccessible.
Therefore, from a security perspective, we need to do whatever we can to make sure our services have optimal
uptime and performance. A popular and effective method used to enhance availability and performance is to use
load balancing. Load balancing is a method of distributing network traffic across servers that are part of a service.
For example, if you have front-end web servers as part of your service, you can use load balancing to distribute the
traffic across your multiple front-end web servers.
This distribution of traffic increases availability because if one of the web servers becomes unavailable, the load
balancer stops sending traffic to that server and redirects it to the servers that are still online. Load balancing also
helps performance, because the processor, network and memory overhead for serving requests is distributed
across all the load balanced servers.
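The distribution and failover behavior described above can be sketched in a few lines. This is a conceptual model only, with invented server names, not how Azure Load Balancer actually distributes flows:

```python
# A minimal sketch of the idea: requests rotate across healthy servers, and
# a server that fails its health checks stops receiving traffic. Server
# names are invented; this is not Azure Load Balancer's implementation.
servers = ["web1", "web2", "web3"]
healthy = set(servers)

def route_request(counter: int) -> str:
    pool = sorted(healthy)                # only healthy servers receive traffic
    return pool[counter % len(pool)]

assert [route_request(i) for i in range(3)] == ["web1", "web2", "web3"]
healthy.discard("web2")                   # web2 goes offline
assert [route_request(i) for i in range(4)] == ["web1", "web3", "web1", "web3"]
```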
We recommend that you employ load balancing whenever you can, and as appropriate for your services. We'll
address appropriateness in the following sections. At the Azure Virtual Network level, Azure provides three
primary load balancing options:
HTTP -based load balancing
External load balancing
Internal load balancing

HTTP-based Load Balancing


HTTP-based load balancing bases decisions about which server to send connections to on characteristics of the
HTTP protocol. Azure's HTTP load balancer goes by the name of Application Gateway.
We recommend that you use Azure Application Gateway when you have:
Applications that require requests from the same user/client session to reach the same back-end virtual
machine. Examples of this would be shopping cart apps and web mail servers.
Applications that want to free web server farms from SSL termination overhead by taking advantage of
Application Gateway's SSL offload feature.
Applications, such as a content delivery network, that require multiple HTTP requests on the same long-
running TCP connection to be routed or load balanced to different back-end servers.
To learn more about how Azure Application Gateway works and how you can use it in your deployments, see
Application Gateway Overview.
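The first capability mentioned above, session affinity, can be sketched as follows. This is a hypothetical model with invented back-end names, not Application Gateway's internal logic; it only illustrates how an affinity cookie keeps a session on one back-end VM:

```python
# A toy sketch of cookie-based session affinity: the first request from a
# client is assigned a back end, and the affinity cookie pins later requests
# to the same VM. Names are invented; this is not how Application Gateway
# is implemented internally.
import hashlib
from typing import Optional, Tuple

backends = ["vm-a", "vm-b", "vm-c"]

def choose_backend(cookie: Optional[str], client_id: str) -> Tuple[str, str]:
    """Return (backend, affinity_cookie) for one request."""
    if cookie in backends:
        return cookie, cookie  # honor the existing affinity cookie
    # New session: a hash of the client id stands in for the initial pick.
    idx = int(hashlib.sha256(client_id.encode()).hexdigest(), 16) % len(backends)
    return backends[idx], backends[idx]

first, cookie = choose_backend(None, "client-42")
for _ in range(5):            # every later request reaches the same VM
    backend, cookie = choose_backend(cookie, "client-42")
    assert backend == first
```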

External Load Balancing


External load balancing takes place when incoming connections from the Internet are load balanced among your
servers located in an Azure Virtual Network. The Azure External Load Balancer can provide this capability, and
we recommend that you use it when you don't require sticky sessions or SSL offload.
In contrast to HTTP-based load balancing, the External Load Balancer uses information at the network and
transport layers of the OSI networking model to decide which server to load balance connections to.
We recommend that you use External Load Balancing whenever you have stateless applications accepting
incoming requests from the Internet.
To learn more about how the Azure External Load Balancer works and how you can deploy it, see Get Started
Creating an Internet Facing Load Balancer in Resource Manager using PowerShell.
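The Layer 4 decision process can be illustrated with a hash over the connection's 5-tuple. The sketch below uses invented addresses and a stand-in hash function, not Azure's actual distribution algorithm, but the principle (one connection consistently maps to one back end) is the same:

```python
# A sketch of Layer 4 load balancing: the back end is picked from a hash of
# the connection's 5-tuple, so all packets of one TCP connection land on the
# same server. Addresses are invented; this is not Azure's hashing algorithm.
import hashlib

def pick_server(src_ip, src_port, dst_ip, dst_port, protocol, pool):
    five_tuple = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}"
    digest = int(hashlib.sha256(five_tuple.encode()).hexdigest(), 16)
    return pool[digest % len(pool)]

pool = ["10.0.1.4", "10.0.1.5", "10.0.1.6"]
a = pick_server("203.0.113.7", 50000, "10.0.0.10", 80, "TCP", pool)
b = pick_server("203.0.113.7", 50000, "10.0.0.10", 80, "TCP", pool)
assert a == b          # same connection, same server
assert a in pool
```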

Internal Load Balancing


Internal load balancing is similar to external load balancing and uses the same mechanism to load balance
connections to the servers behind them. The only difference is that the load balancer in this case is accepting
connections from virtual machines that are not on the Internet. In most cases, the connections that are accepted for
load balancing are initiated by devices on an Azure Virtual Network.
We recommend that you use internal load balancing for scenarios that benefit from this capability, such as when
you need to load balance connections to SQL Servers or internal web servers.
To learn more about how Azure Internal Load Balancing works and how you can deploy it, see Get Started
Creating an Internal Load Balancer using PowerShell.

Use global load balancing


Public cloud computing makes it possible to deploy globally distributed applications that have components located
in datacenters all over the world. This is possible on Microsoft Azure due to Azure’s global datacenter presence. In
contrast to the load balancing technologies mentioned earlier, global load balancing makes it possible to make
services available even when entire datacenters might become unavailable.
You can get this type of global load balancing in Azure by taking advantage of Azure Traffic Manager. Traffic
Manager makes it possible to load balance connections to your services based on the location of the user.
For example, if the user is making a request to your service from the EU, the connection is directed to your services
located in an EU datacenter. This part of Traffic Manager global load balancing helps to improve performance
because connecting to the nearest datacenter is faster than connecting to datacenters that are far away.
On the availability side, global load balancing makes sure that your service is available even if an entire datacenter
should become unavailable.
For example, if an Azure datacenter should become unavailable due to environmental reasons or due to outages
(such as regional network failures), connections to your service would be rerouted to the nearest online datacenter.
This global load balancing is accomplished by taking advantage of DNS policies that you can create in Traffic
Manager.
We recommend that you use Traffic Manager for any cloud solution you develop that has a widely distributed
scope across multiple regions and requires the highest level of uptime possible.
To learn more about Azure Traffic Manager and how to deploy it, see What is Traffic Manager.
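The routing behavior described in this section can be sketched as follows, assuming invented region names and latency figures; real Traffic Manager works at the DNS level with measured latency data:

```python
# A toy model of the behavior described above: clients are directed to the
# lowest-latency healthy region, and traffic is rerouted when a region goes
# offline. Region names and latencies are invented for illustration.
endpoints = {
    "eu-west": {"latency_ms": 20, "healthy": True},
    "us-east": {"latency_ms": 90, "healthy": True},
    "asia-se": {"latency_ms": 180, "healthy": True},
}

def resolve() -> str:
    healthy = {name: e for name, e in endpoints.items() if e["healthy"]}
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])

assert resolve() == "eu-west"            # nearest region wins
endpoints["eu-west"]["healthy"] = False  # simulate a regional outage
assert resolve() == "us-east"            # traffic fails over to the next region
```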

Disable RDP/SSH Access to Azure Virtual Machines


It is possible to reach Azure Virtual Machines using the Remote Desktop Protocol (RDP) and the Secure Shell
(SSH) protocols. These protocols make it possible to manage virtual machines from remote locations and are
standard in datacenter computing.
The potential security problem with using these protocols over the Internet is that attackers can use various brute
force techniques to gain access to Azure Virtual Machines. Once the attackers gain access, they can use your virtual
machine as a launch point for compromising other machines on your Azure Virtual Network or even attack
networked devices outside of Azure.
Because of this, we recommend that you disable direct RDP and SSH access to your Azure Virtual Machines from
the Internet. After direct RDP and SSH access from the Internet is disabled, you have other options you can use to
access these virtual machines for remote management:
Point-to-site VPN
Site-to-site VPN
ExpressRoute
Point-to-site VPN is another term for a remote access VPN client/server connection. A point-to-site VPN enables a
single user to connect to an Azure Virtual Network over the Internet. After the point-to-site connection is
established, the user will be able to use RDP or SSH to connect to any virtual machines located on the Azure
Virtual Network that the user connected to via point-to-site VPN. This assumes that the user is authorized to reach
those virtual machines.
Point-to-site VPN is more secure than direct RDP or SSH connections because the user has to authenticate twice
before connecting to a virtual machine. First, the user needs to authenticate (and be authorized) to establish the
point-to-site VPN connection; second, the user needs to authenticate (and be authorized) to establish the RDP or
SSH session.
A site-to-site VPN connects an entire network to another network over the Internet. You can use a site-to-site VPN
to connect your on-premises network to an Azure Virtual Network. If you deploy a site-to-site VPN, users on your
on-premises network can connect to virtual machines on your Azure Virtual Network by using the RDP or SSH
protocol over the site-to-site VPN connection, without requiring you to allow direct RDP or SSH access over the
Internet.
You can also use a dedicated WAN link to provide functionality similar to a site-to-site VPN. The main differences
are that a dedicated WAN link doesn't traverse the Internet, and dedicated WAN links are typically more stable
and performant. Azure provides a dedicated WAN link solution in the form of ExpressRoute.

Enable Azure Security Center


Azure Security Center helps you prevent, detect, and respond to threats, and provides you increased visibility into,
and control over, the security of your Azure resources. It provides integrated security monitoring and policy
management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works
with a broad ecosystem of security solutions.
Azure Security Center helps you optimize and monitor network security by:
Providing network security recommendations
Monitoring the state of your network security configuration
Alerting you to network based threats both at the endpoint and network levels
We highly recommend that you enable Azure Security Center for all of your Azure deployments.
To learn more about Azure Security Center and how to enable it for your deployments, see Introduction to Azure
Security Center.

Securely extend your datacenter into Azure


Many enterprise IT organizations are looking to expand into the cloud instead of growing their on-premises
datacenters. This expansion represents an extension of existing IT infrastructure into the public cloud. By taking
advantage of cross-premises connectivity options, it’s possible to treat your Azure Virtual Networks as just another
subnet on your on-premises network infrastructure.
However, there are planning and design issues that need to be addressed first. This is especially important in the
area of network security. One of the best ways to understand how you approach such a design is to see an example.
Microsoft has created the Datacenter Extension Reference Architecture Diagram and supporting collateral to help
you understand what such a datacenter extension would look like. This provides an example reference
implementation that you can use to plan and design a secure enterprise datacenter extension to the cloud. We
recommend that you review this document to get an idea of the key components of a secure solution.
To learn more about how to securely extend your datacenter into Azure, view the video Extending Your Datacenter
to Microsoft Azure.
Using load-balancing services in Azure
2/16/2018 • 10 minutes to read • Edit Online

Introduction
Microsoft Azure provides multiple services for managing how network traffic is distributed and load balanced. You
can use these services individually or combine their methods, depending on your needs, to build the optimal
solution.
In this tutorial, we first define a customer use case and see how it can be made more robust and performant by
using the following Azure load-balancing portfolio: Traffic Manager, Application Gateway, and Load Balancer. We
then provide step-by-step instructions for creating a deployment that is geographically redundant, distributes
traffic to VMs, and helps you manage different types of requests.
At a conceptual level, each of these services plays a distinct role in the load-balancing hierarchy.
Traffic Manager provides global DNS load balancing. It looks at incoming DNS requests and responds
with a healthy endpoint, in accordance with the routing policy the customer has selected. Options for routing
methods are:
Performance routing to send the requestor to the closest endpoint in terms of latency.
Priority routing to direct all traffic to an endpoint, with other endpoints as backup.
Weighted round-robin routing, which distributes traffic based on the weighting that is assigned to each
endpoint.
The client connects directly to that endpoint. Azure Traffic Manager detects when an endpoint is unhealthy
and then redirects the clients to another healthy instance. Refer to Azure Traffic Manager documentation to
learn more about the service.
Application Gateway provides application delivery controller (ADC) as a service, offering various Layer 7
load-balancing capabilities for your application. It allows customers to optimize web farm productivity by
offloading CPU-intensive SSL termination to the application gateway. Other Layer 7 routing capabilities include
round-robin distribution of incoming traffic, cookie-based session affinity, URL path-based routing, and the
ability to host multiple websites behind a single application gateway. Application Gateway can be configured as
an Internet-facing gateway, an internal-only gateway, or a combination of both. Application Gateway is fully
Azure managed, scalable, and highly available. It provides a rich set of diagnostics and logging capabilities for
better manageability.
Load Balancer is an integral part of the Azure SDN stack, providing high-performance, low-latency Layer 4
load-balancing services for all UDP and TCP protocols. It manages inbound and outbound connections. You can
configure public and internal load-balanced endpoints and define rules to map inbound connections to back-
end pool destinations by using TCP and HTTP health-probing options to manage service availability.
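As a concrete illustration of the weighted round-robin method listed above, the following sketch (with invented endpoint names and weights, not Traffic Manager's implementation) shows how responses are distributed in proportion to weight:

```python
# A minimal sketch of weighted round-robin routing: each endpoint receives
# a share of DNS responses proportional to its weight. Endpoint names and
# weights are invented for illustration.
weights = {"endpoint-a": 3, "endpoint-b": 1}

def weighted_pick(counter: int) -> str:
    # Deterministic schedule: repeat each endpoint name by its weight.
    schedule = [name for name, w in sorted(weights.items()) for _ in range(w)]
    return schedule[counter % len(schedule)]

picks = [weighted_pick(i) for i in range(8)]
assert picks.count("endpoint-a") == 6  # 3 of every 4 responses
assert picks.count("endpoint-b") == 2  # 1 of every 4 responses
```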

Scenario
In this example scenario, we use a simple website that serves two types of content: images and dynamically
rendered webpages. The website must be geographically redundant, and it should serve its users from the closest
(lowest latency) location to them. The application developer has decided that any URLs that match the pattern
/images/* are served from a dedicated pool of VMs that are different from the rest of the web farm.
Additionally, the default VM pool serving the dynamic content needs to talk to a back-end database that is hosted
on a high-availability cluster. The entire deployment is set up through Azure Resource Manager.
Using Traffic Manager, Application Gateway, and Load Balancer allows this website to achieve these design goals:
Multi-geo redundancy: If one region goes down, Traffic Manager routes traffic seamlessly to the closest
region without any intervention from the application owner.
Reduced latency: Because Traffic Manager automatically directs the customer to the closest region, the
customer experiences lower latency when requesting the webpage contents.
Independent scalability: Because the web application workload is separated by type of content, the
application owner can scale the request workloads independent of each other. Application Gateway ensures that
the traffic is routed to the right pools based on the specified rules and the health of the application.
Internal load balancing: Because Load Balancer is in front of the high-availability cluster, only the active and
healthy endpoint for a database is exposed to the application. Additionally, a database administrator can
optimize the workload by distributing active and passive replicas across the cluster independent of the front-end
application. Load Balancer delivers connections to the high-availability cluster and ensures that only healthy
databases receive connection requests.
The following diagram shows the architecture of this scenario:

NOTE
This example is only one of many possible configurations of the load-balancing services that Azure offers. Traffic Manager,
Application Gateway, and Load Balancer can be mixed and matched to best suit your load-balancing needs. For example, if
SSL offload or Layer 7 processing is not necessary, Load Balancer can be used in place of Application Gateway.

Setting up the load-balancing stack


Step 1: Create a Traffic Manager profile
1. In the Azure portal, click Create a resource > Networking > Traffic Manager profile > Create.
2. Enter the following basic information:
Name: Give your Traffic Manager profile a DNS prefix name.
Routing method: Select the traffic-routing method policy. For more information about the methods, see
About Traffic Manager traffic routing methods.
Subscription: Select the subscription that contains the profile.
Resource group: Select the resource group that contains the profile. It can be a new or existing resource
group.
Resource group location: Traffic Manager service is global and not bound to a location. However, you
must specify a region for the group where the metadata associated with the Traffic Manager profile
resides. This location has no impact on the runtime availability of the profile.
3. Click Create to generate the Traffic Manager profile.

Step 2: Create the application gateways


1. In the Azure portal, in the left pane, click Create a resource > Networking > Application Gateway.
2. Enter the following basic information about the application gateway:
Name: The name of the application gateway.
SKU size: The size of the application gateway, available as Small, Medium, or Large.
Instance count: The number of instances, a value from 2 through 10.
Resource group: The resource group that holds the application gateway. It can be an existing resource
group or a new one.
Location: The region for the application gateway, which is the same location as the resource group. The
location is important, because the virtual network and public IP must be in the same location as the
gateway.
3. Click OK.
4. Define the virtual network, subnet, front-end IP, and listener configurations for the application gateway. In this
scenario, the front-end IP address is Public, which allows it to be added as an endpoint to the Traffic Manager
profile later on.
5. Configure the listener with one of the following options:
If you use HTTP, there is nothing to configure. Click OK.
If you use HTTPS, further configuration is required. Refer to Create an application gateway, starting at
step 9. When you have completed the configuration, click OK.
Configure URL routing for application gateways
When you choose a back-end pool, an application gateway that's configured with a path-based rule takes a path
pattern of the request URL in addition to round-robin distribution. In this scenario, we are adding a path-based rule
to direct any URL with "/images/*" to the image server pool. For more information about configuring URL path-
based routing for an application gateway, refer to Create a path-based rule for an application gateway.

1. From your resource group, go to the instance of the application gateway that you created in the preceding
section.
2. Under Settings, select Backend pools, and then select Add to add the VMs that you want to associate with the
web-tier back-end pools.
3. Enter the name of the back-end pool and all the IP addresses of the machines that reside in the pool. In this
scenario, we are connecting two back-end server pools of virtual machines.

4. Under Settings of the application gateway, select Rules, and then click the Path based button to add a rule.
5. Configure the rule by providing the following information.
Basic settings:
Name: The friendly name of the rule that is accessible in the portal.
Listener: The listener that is used for the rule.
Default backend pool: The back-end pool to be used with the default rule.
Default HTTP settings: The HTTP settings to be used with the default rule.
Path-based rules:
Name: The friendly name of the path-based rule.
Paths: The path rule that is used for forwarding traffic.
Backend Pool: The back-end pool to be used with this rule.
HTTP Setting: The HTTP settings to be used with this rule.

IMPORTANT
Paths: Valid paths must start with "/". The wildcard "*" is allowed only at the end. Valid examples are /xyz, /xyz*, or
/xyz/*.
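The path validation and routing rules in this step can be sketched as follows. The pool names are the ones used in this scenario; the matching code itself is only an illustrative approximation of Application Gateway's behavior:

```python
# A sketch of the path rules described in this step: valid paths start with
# "/" and may use "*" only at the end, and URLs under /images/ go to the
# image server pool. This is not Application Gateway's matching code.
import re

def is_valid_path(path: str) -> bool:
    return bool(re.fullmatch(r"/[^*]*\*?", path))

def route(url_path: str) -> str:
    return "image-server-pool" if url_path.startswith("/images/") else "default-web-pool"

assert is_valid_path("/xyz") and is_valid_path("/xyz*") and is_valid_path("/xyz/*")
assert not is_valid_path("xyz") and not is_valid_path("/x*z")
assert route("/images/logo.png") == "image-server-pool"
assert route("/index.html") == "default-web-pool"
```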
Step 3: Add application gateways to the Traffic Manager endpoints
In this scenario, Traffic Manager is connected to application gateways (as configured in the preceding steps) that
reside in different regions. Now that the application gateways are configured, the next step is to connect them to
your Traffic Manager profile.
1. Open your Traffic Manager profile. To do so, look in your resource group or search for the name of the Traffic
Manager profile from All Resources.
2. In the left pane, select Endpoints, and then click Add to add an endpoint.

3. Create an endpoint by entering the following information:


Type: Select the type of endpoint to load-balance. In this scenario, select Azure endpoint because we
are connecting it to the application gateway instances that were configured previously.
Name: Enter the name of the endpoint.
Target resource type: Select Public IP address and then, under Target resource, select the public IP
of the application gateway that was configured previously.
4. Now you can test your setup by accessing it with the DNS of your Traffic Manager profile (in this example:
TrafficManagerScenario.trafficmanager.net). You can resend requests, bring up or bring down VMs and web
servers that were created in different regions, and change the Traffic Manager profile settings to test your
setup.
Step 4: Create a load balancer
In this scenario, Load Balancer distributes connections from the web tier to the databases within a high-availability
cluster.
If your high-availability database cluster is using SQL Server AlwaysOn, refer to Configure one or more Always
On Availability Group Listeners for step-by-step instructions.
For more information about configuring an internal load balancer, see Create an Internal load balancer in the Azure
portal.
1. In the Azure portal, in the left pane, click Create a resource > Networking > Load balancer.
2. Choose a name for your load balancer.
3. Set the Type to Internal, and choose the appropriate virtual network and subnet for the load balancer to reside
in.
4. Under IP address assignment, select either Dynamic or Static.
5. Under Resource group, choose the resource group for the load balancer.
6. Under Location, choose the appropriate region for the load balancer.
7. Click Create to generate the load balancer.
Connect a back-end database tier to the load balancer
1. From your resource group, find the load balancer that was created in the previous steps.
2. Under Settings, click Backend pools, and then click Add to add a back-end pool.
3. Enter the name of the back-end pool.
4. Add either individual machines or an availability set to the back-end pool.
Configure a probe
1. In your load balancer, under Settings, select Probes, and then click Add to add a probe.

2. Enter the name for the probe.


3. Select the Protocol for the probe. For a database, you might want a TCP probe rather than an HTTP probe. To
learn more about load-balancer probes, refer to Understand load balancer probes.
4. Enter the Port of your database to be used for accessing the probe.
5. Under Interval, specify how frequently to probe the application.
6. Under Unhealthy threshold, specify the number of continuous probe failures that must occur for the back-end
VM to be considered unhealthy.
7. Click OK to create the probe.
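The interval and unhealthy-threshold settings work together as sketched below. This is a simplified model (here a single successful probe restores the back end, and the threshold value is invented), not the exact probe algorithm:

```python
# A toy model of the probe behavior configured above: the back end is probed
# at a fixed interval, and only after the unhealthy-threshold number of
# consecutive failures is it taken out of rotation. Values are invented.
UNHEALTHY_THRESHOLD = 2

def probe_state(results, threshold=UNHEALTHY_THRESHOLD):
    """Given a sequence of probe results (True = success), return final health."""
    consecutive_failures = 0
    healthy = True
    for ok in results:
        consecutive_failures = 0 if ok else consecutive_failures + 1
        if consecutive_failures >= threshold:
            healthy = False
        elif ok:
            healthy = True       # a successful probe restores the back end
    return healthy

assert probe_state([True, False, True])       # one failure is tolerated
assert not probe_state([True, False, False])  # two in a row -> unhealthy
assert probe_state([False, False, True])      # recovers after a success
```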
Configure the load-balancing rules
1. Under Settings of your load balancer, select Load balancing rules, and then click Add to create a rule.
2. Enter the Name for the load-balancing rule.
3. Choose the Frontend IP Address of the load balancer, Protocol, and Port.
4. Under Backend port, specify the port to be used in the back-end pool.
5. Select the Backend pool and the Probe that were created in the previous steps to apply the rule to.
6. Under Session persistence, choose how you want the sessions to persist.
7. Under Idle timeouts, specify the number of minutes before an idle timeout.
8. Under Floating IP, select either Disabled or Enabled.
9. Click OK to create the rule.
Step 5: Connect web-tier VMs to the load balancer
Now we configure the IP address and load-balancer front-end port in the applications that are running on your
web-tier VMs for any database connections. This configuration is specific to the applications that run on these VMs.
To configure the destination IP address and port, refer to the application documentation. To find the IP address of
the front end, in the Azure portal, go to the front-end IP pool on the Load balancer settings.

Next steps
Overview of Traffic Manager
Application Gateway overview
Azure Load Balancer overview
Disaster recovery using Azure DNS and Traffic
Manager
6/8/2018 • 10 minutes to read • Edit Online

Disaster recovery focuses on recovering from a severe loss of application functionality. In order to choose a
disaster recovery solution, business and technology owners must first determine the level of functionality that is
required during a disaster: unavailable, partially available (reduced functionality or delayed availability), or fully
available. Most enterprise customers choose a multi-region architecture for resiliency against an application or
infrastructure level failure. Customers can take several approaches to achieve failover and high availability via
redundant architecture. Here are some of the popular approaches:
Active-passive with cold standby: In this failover solution, the VMs and other appliances that are running in
the standby region are not active until there is a need for failover. However, the production environment is
replicated to a different region in the form of backups, VM images, or Resource Manager templates. This
failover mechanism is cost-effective but takes longer to complete a full failover.

Figure - Active/Passive with cold standby disaster recovery configuration


Active/Passive with pilot light: In this failover solution, the standby environment is set up with a minimal
configuration. The setup has only the necessary services running to support a minimal and critical set
of applications. In its native form, this scenario can only execute minimal functionality, but it can scale up and
spawn additional services to take the bulk of the production load if a failover occurs.
Figure: Active/Passive with pilot light disaster recovery configuration
Active/Passive with warm standby: In this failover solution, the standby region is pre-warmed and
ready to take the base load: autoscaling is turned on, and all the instances are up and running. This solution
is not scaled to take the full production load, but it is functional, and all services are up and running. It is an
augmented version of the pilot light approach.

Figure: Active/Passive with warm standby disaster recovery configuration


To learn more about failover and high availability, see Disaster Recovery for Azure Applications.

Planning your disaster recovery architecture


There are two technical aspects to setting up your disaster recovery architecture:
Using a deployment mechanism to replicate instances, data, and configurations between primary and standby
environments. This type of disaster recovery can be done natively via Azure Site Recovery or via Microsoft Azure
partner appliances/services like Veritas or NetApp.
Developing a solution to divert network/web traffic from the primary site to the standby site. This type of
disaster recovery can be achieved via Azure DNS, Azure Traffic Manager (DNS), or third-party global load
balancers.
This article is limited to approaches via network and web traffic redirection. For instructions to set up Azure Site
Recovery, see the Azure Site Recovery documentation. DNS is one of the most efficient mechanisms to divert network
traffic because DNS is often global and external to the data center and is insulated from any regional or availability
zone (AZ) level failure. You can use a DNS-based failover mechanism, and in Azure, two DNS services can
accomplish this in some fashion: Azure DNS (authoritative DNS) and Azure Traffic Manager (DNS-based
smart traffic routing).
It is important to understand a few DNS concepts that are used extensively in the solutions discussed in
this article:
DNS A record – A records point a domain to an IPv4 address.
CNAME or canonical name – This record type points to another DNS record. A CNAME doesn't
respond with an IP address but rather with a pointer to the record that contains the IP address.
Weighted routing – You can associate a weight with service endpoints and then distribute the traffic
based on the assigned weights. This routing method is one of the four traffic-routing mechanisms available
within Traffic Manager. For more information, see Weighted routing method.
Priority routing – Priority routing is based on health checks of endpoints. By default, Azure Traffic Manager
sends all traffic to the highest-priority endpoint, and upon a failure or disaster, Traffic Manager routes the traffic
to the secondary endpoint. For more information, see Priority routing method.

Manual failover using Azure DNS


The Azure DNS manual failover solution for disaster recovery uses the standard DNS mechanism to failover to the
backup site. The manual option via Azure DNS works best when used in conjunction with the cold standby or the
pilot light approach.
Figure - Manual failover using Azure DNS
The assumptions made for the solution are:
Both primary and secondary endpoints have static IPs that don’t change often. Say for the primary site the IP is
100.168.124.44 and the IP for the secondary site is 100.168.124.43.
An Azure DNS zone exists for both the primary and secondary sites. Say the primary site's endpoint is
prod.contoso.com and the backup site's endpoint is dr.contoso.com. A DNS record for the main application, known as
www.contoso.com, also exists.
The TTL is at or below the RTO SLA set in the organization. For example, if an enterprise sets the RTO of the
application's disaster response to 60 minutes, then the TTL should be less than 60 minutes, and preferably much
lower.
You can set up Azure DNS for manual failover as follows:
Create a DNS zone
Create DNS zone records
Update CNAME record
Step 1: Create a DNS zone
Create a DNS zone (for example, www.contoso.com) as shown below:
Figure - Create a DNS zone in Azure
Step 2: Create DNS zone records
Within this zone, create three records (for example, www.contoso.com, prod.contoso.com, and dr.contoso.com) as
shown below.

Figure - Create DNS zone records in Azure
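If you prefer scripting, steps 1 and 2 can also be performed with the Azure CLI. The following is a minimal sketch, assuming an authenticated CLI session; the resource group name MyResourceGroup is an illustrative placeholder:

```shell
# Create the zone that will hold all three records.
az network dns zone create \
    --resource-group MyResourceGroup \
    --name contoso.com

# A records for the production and DR endpoints, each with a 5-minute TTL.
az network dns record-set a create --resource-group MyResourceGroup \
    --zone-name contoso.com --name prod --ttl 300
az network dns record-set a add-record --resource-group MyResourceGroup \
    --zone-name contoso.com --record-set-name prod --ipv4-address 100.168.124.44
az network dns record-set a create --resource-group MyResourceGroup \
    --zone-name contoso.com --name dr --ttl 300
az network dns record-set a add-record --resource-group MyResourceGroup \
    --zone-name contoso.com --record-set-name dr --ipv4-address 100.168.124.43

# www points at the production site during normal operations (30-minute TTL).
az network dns record-set cname create --resource-group MyResourceGroup \
    --zone-name contoso.com --name www --ttl 1800
az network dns record-set cname set-record --resource-group MyResourceGroup \
    --zone-name contoso.com --record-set-name www --cname prod.contoso.com
```

These commands require an Azure subscription and an existing resource group, so they are not runnable offline.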


In this scenario, the site www.contoso.com has a TTL of 30 minutes, which is well below the stated RTO, and it points
to the production site prod.contoso.com. This configuration applies during normal business operations. The TTL of
prod.contoso.com and dr.contoso.com is set to 300 seconds, or 5 minutes. You can use an Azure monitoring
service such as Azure Monitor or Azure Application Insights, a partner monitoring solution such as Dynatrace, or
even a home-grown solution that can monitor or detect application- or virtual infrastructure-level failures.
Step 3: Update the CNAME record
Once failure is detected, change the record value to point to dr.contoso.com as shown below:

Figure - Update the CNAME record in Azure


Within 30 minutes, during which most resolvers will refresh the cached zone file, any query to www.contoso.com
will be redirected to dr.contoso.com. You can also run the following Azure CLI command to change the CNAME
value:

az network dns record-set cname set-record \
    --resource-group 123 \
    --zone-name contoso.com \
    --record-set-name www \
    --cname dr.contoso.com

This step can be executed manually or via automation. It can be done manually via the console or by the Azure CLI.
The Azure SDK and API can be used to automate the CNAME update so that no manual intervention is required.
Automation can be built via Azure functions or within a third-party monitoring application or even from on-
premises.
How manual failover works using Azure DNS
Since the DNS server is outside the failover or disaster zone, it is insulated against any downtime. This enables
you to architect a simple failover scenario that is cost-effective and works at all times, assuming that the
operator has network connectivity during the disaster and can make the flip. If the solution is scripted, ensure
that the server or service running the script is insulated against the problem affecting the
production environment. Also, keep in mind the low TTL that was set against the zone, so that no resolver around
the world keeps the endpoint cached for long and customers can access the site within the RTO. For a cold standby
and pilot light, since some pre-warming and other administrative activity may be required, give yourself
enough time before making the flip.

Automatic failover using Azure Traffic Manager


When you have complex architectures and multiple sets of resources capable of performing the same function, you
can configure Azure Traffic Manager (DNS-based) to check the health of your resources and route the traffic
from the unhealthy resource to the healthy one. In the following example, both the primary region and the
secondary region have a full deployment. This deployment includes the cloud services and a synchronized
database.
Figure - Automatic failover using Azure Traffic Manager
However, only the primary region is actively handling network requests from the users. The secondary region
becomes active only when the primary region experiences a service disruption. In that case, all new network
requests route to the secondary region. Since the backup of the database is near instantaneous, both load
balancers have IPs that can be health checked, and the instances are always up and running, this topology provides
an option for a low RTO and failover without any manual intervention. The secondary failover region
must be ready to go live immediately after failure of the primary region. This scenario is ideal for Azure
Traffic Manager, which has built-in probes for various types of health checks, including HTTP/HTTPS and TCP. Azure
Traffic Manager also has a rule engine that can be configured to fail over when a failure occurs, as described below.
Let’s consider the following solution using Traffic Manager:
The customer has the Region #1 endpoint, known as prod.contoso.com, with a static IP of 100.168.124.44, and a
Region #2 endpoint, known as dr.contoso.com, with a static IP of 100.168.124.43.
Each of these environments is fronted by a public-facing property like a load balancer. The load balancer can be
configured to have a DNS-based endpoint or a fully qualified domain name (FQDN) as shown above.
All the instances in Region 2 are in near real-time replication with Region 1. Furthermore, the machine images
are up-to-date, and all software/configuration data is patched and are in line with Region 1.
Autoscaling is preconfigured in advance.
The steps taken to configure the failover with Azure Traffic Manager are as follows:
1. Create a new Azure Traffic Manager profile
2. Create endpoints within the Traffic Manager profile
3. Set up health check and failover configuration
Step 1: Create a new Azure Traffic Manager profile
Create a new Azure Traffic Manager profile with the name contoso123 and select the Routing method as Priority. If
you have a pre-existing resource group that you want to associate with it, select that resource group; otherwise,
create a new resource group.

Figure - Create a Traffic Manager profile
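The profile above can also be created from the Azure CLI. A minimal sketch, assuming an authenticated session; the resource group name is an illustrative placeholder:

```shell
# Create a priority-routed Traffic Manager profile with a 10-second DNS TTL.
az network traffic-manager profile create \
    --name contoso123 \
    --resource-group MyResourceGroup \
    --routing-method Priority \
    --unique-dns-name contoso123 \
    --ttl 10
```

The monitoring protocol, port, and path can also be set at creation time; parameter names vary slightly across CLI versions, so check `az network traffic-manager profile create --help` for your installed version.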


Step 2: Create endpoints within the Traffic Manager profile
In this step, you create endpoints that point to the production and disaster recovery sites. Here, choose the Type as
an external endpoint, but if the resource is hosted in Azure, then you can choose Azure endpoint as well. If you
choose Azure endpoint, then select a Target resource that is either an App Service or a Public IP that is
allocated by Azure. The priority is set as 1 since it is the primary service for Region 1. Similarly, create the disaster
recovery endpoint within Traffic Manager as well.
Figure - Create disaster recovery endpoints
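The two endpoints can be scripted in the same way. A sketch under the same assumptions (authenticated session, illustrative resource group name):

```shell
# Region 1 (production) endpoint, priority 1.
az network traffic-manager endpoint create \
    --name prod \
    --profile-name contoso123 \
    --resource-group MyResourceGroup \
    --type externalEndpoints \
    --target prod.contoso.com \
    --priority 1

# Region 2 (disaster recovery) endpoint, priority 2.
az network traffic-manager endpoint create \
    --name dr \
    --profile-name contoso123 \
    --resource-group MyResourceGroup \
    --type externalEndpoints \
    --target dr.contoso.com \
    --priority 2
```

With priority routing, Traffic Manager serves the priority-1 endpoint while it is healthy and falls back to priority 2 on failure.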
Step 3: Set up health check and failover configuration
In this step, you set the DNS TTL to 10 seconds, which is honored by most internet-facing recursive resolvers. This
configuration means that no DNS resolver will cache the information for more than 10 seconds. For the endpoint
monitor settings, the path is currently set to / or root, but you can customize the endpoint settings to evaluate a
path, for example, prod.contoso.com/index. The example below shows HTTPS as the probing protocol. However, you
can choose HTTP or TCP as well. The choice of protocol depends upon the end application. The probing interval is
set to 10 seconds, which enables fast probing, and the retry count is set to 3. As a result, Traffic Manager fails
over to the second endpoint if three consecutive intervals register a failure. The following formula defines the
total time for an automated failover:
Time for failover = TTL + Retry * Probing interval
In this case, the value is 10 + 3 * 10 = 40 seconds (maximum). If the Retry is set to 1 and the TTL is set to 10
seconds, then the time for failover is 10 + 1 * 10 = 20 seconds. Set the Retry to a value greater than 1 to
eliminate chances of failovers due to false positives or minor network blips.
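The failover-time arithmetic can be checked with a quick shell calculation (values taken from the example in the text):

```shell
# Worst-case automated failover time = TTL + Retry * Probing interval (seconds).
TTL=10
RETRY=3
INTERVAL=10
echo "$((TTL + RETRY * INTERVAL)) seconds"   # prints "40 seconds"
```

With Retry lowered to 1, the same expression yields 20 seconds, matching the second example above.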

Figure - Set up health check and failover configuration


How automatic failover works using Traffic Manager
During a disaster, the primary endpoint is probed, its status changes to degraded, and the disaster recovery
site remains online. By default, Traffic Manager sends all traffic to the primary (highest-priority) endpoint. If the
primary endpoint appears degraded, Traffic Manager routes the traffic to the second endpoint as long as it remains
healthy. You have the option to configure more endpoints within Traffic Manager that can serve as additional
failover endpoints, or as load balancers sharing the load between endpoints.

Next steps
Learn more about Azure Traffic Manager.
Learn more about Azure DNS.
Plan virtual networks
5/30/2018 • 10 minutes to read • Edit Online

Creating a virtual network to experiment with is easy enough, but chances are, you will deploy multiple virtual
networks over time to support the production needs of your organization. With some planning, you will be able to
deploy virtual networks and connect the resources you need more effectively. The information in this article is most
helpful if you're already familiar with virtual networks and have some experience working with them. If you are not
familiar with virtual networks, it's recommended that you read Virtual network overview.

Naming
All Azure resources have a name. The name must be unique within a scope, which may vary for each resource type.
For example, the name of a virtual network must be unique within a resource group, but can be duplicated within a
subscription or Azure region. Defining a naming convention that you can use consistently when naming resources
is helpful when managing several network resources over time. For suggestions, see Naming conventions.

Regions
All Azure resources are created in an Azure region and subscription. A resource can only be created in a virtual
network that exists in the same region and subscription as the resource. You can, however, connect virtual networks
that exist in different subscriptions and regions. For more information, see connectivity. When deciding which
region(s) to deploy resources in, consider where consumers of the resources are physically located:
Consumers of resources typically want the lowest network latency to their resources. To determine relative
latencies between a specified location and Azure regions, see View relative latencies.
Do you have data residency, sovereignty, compliance, or resiliency requirements? If so, choosing the region that
aligns to the requirements is critical. For more information, see Azure geographies.
Do you require resiliency across Azure Availability Zones within the same Azure region for the resources you
deploy? You can deploy resources, such as virtual machines (VM ) to different availability zones within the same
virtual network. Not all Azure regions support availability zones however. To learn more about availability zones
and the regions that support them, see Availability zones.

Subscriptions
You can deploy as many virtual networks as required within each subscription, up to the limit. Some organizations
have different subscriptions for different departments, for example. For more information and considerations
around subscriptions, see Subscription governance.

Segmentation
You can create multiple virtual networks per subscription and per region. You can create multiple subnets within
each virtual network. The considerations that follow help you determine how many virtual networks and subnets
you require:
Virtual networks
A virtual network is a virtual, isolated portion of the Azure public network. Each virtual network is dedicated to
your subscription. Things to consider when deciding whether to create one virtual network, or multiple virtual
networks in a subscription:
Do any organizational security requirements exist for isolating traffic into separate virtual networks? You can
choose to connect virtual networks or not. If you connect virtual networks, you can implement a network virtual
appliance, such as a firewall, to control the flow of traffic between the virtual networks. For more information,
see security and connectivity.
Do any organizational requirements exist for isolating virtual networks into separate subscriptions or regions?
A network interface enables a VM to communicate with other resources. Each network interface has one or
more private IP addresses assigned to it. How many network interfaces and private IP addresses do you require
in a virtual network? There are limits to the number of network interfaces and private IP addresses that you can
have within a virtual network.
Do you want to connect the virtual network to another virtual network or on-premises network? You may
choose to connect some virtual networks to each other or on-premises networks, but not others. For more
information, see connectivity. Each virtual network that you connect to another virtual network, or on-premises
network, must have a unique address space. Each virtual network has one or more public or private address
ranges assigned to its address space. An address range is specified in Classless Inter-Domain Routing (CIDR)
format, such as 10.0.0.0/16. Learn more about address ranges for virtual networks.
Do you have any organizational administration requirements for resources in different virtual networks? If so,
you might separate resources into separate virtual networks to simplify permission assignment to individuals in
your organization, or to assign different policies to different virtual networks.
When you deploy some Azure service resources into a virtual network, they create their own virtual network. To
determine whether an Azure service creates its own virtual network, see information for each Azure service that
can be deployed into a virtual network.
Subnets
A virtual network can be segmented into one or more subnets, up to the limits. Things to consider when deciding
whether to create one subnet or multiple subnets within a virtual network:
Each subnet must have a unique address range, specified in CIDR format, within the address space of the virtual
network. The address range cannot overlap with other subnets in the virtual network.
If you plan to deploy some Azure service resources into a virtual network, they may require, or create, their own
subnet, so there must be enough unallocated space for them to do so. To determine whether an Azure service
creates its own subnet, see information for each Azure service that can be deployed into a virtual network. For
example, if you connect a virtual network to an on-premises network using an Azure VPN Gateway, the virtual
network must have a dedicated subnet for the gateway. Learn more about gateway subnets.
Azure routes network traffic between all subnets in a virtual network, by default. You can override Azure's
default routing to prevent Azure routing between subnets, or to route traffic between subnets through a
network virtual appliance, for example. If you require that traffic between resources in the same virtual network
flow through a network virtual appliance (NVA), deploy the resources to different subnets. Learn more in
security.
You can limit access to Azure resources such as an Azure storage account or Azure SQL database, to specific
subnets with a virtual network service endpoint. Further, you can deny access to the resources from the internet.
You may create multiple subnets, and enable a service endpoint for some subnets, but not others. Learn more
about service endpoints, and the Azure resources you can enable them for.
You can associate zero or one network security group to each subnet in a virtual network. You can associate the
same, or a different, network security group to each subnet. Each network security group contains rules, which
allow or deny traffic to and from sources and destinations. Learn more about network security groups.
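The subnet-level association described above can be sketched with the Azure CLI. All resource names below are illustrative assumptions:

```shell
# Create a network security group.
az network nsg create \
    --resource-group MyResourceGroup \
    --name web-tier-nsg

# Associate it with an existing subnet of an existing virtual network.
az network vnet subnet update \
    --resource-group MyResourceGroup \
    --vnet-name MyVnet \
    --name web-tier-subnet \
    --network-security-group web-tier-nsg
```

Associating the group at the subnet level, as recommended earlier, applies its rules to every network interface in that subnet.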

Security
You can filter network traffic to and from resources in a virtual network using network security groups and network
virtual appliances. You can control how Azure routes traffic from subnets. You can also limit who in your
organization can work with resources in virtual networks.
Traffic filtering
You can filter network traffic between resources in a virtual network using a network security group, an NVA
that filters network traffic, or both. To deploy an NVA, such as a firewall, to filter network traffic, see the Azure
Marketplace. When using an NVA, you also create custom routes to route traffic from subnets to the NVA.
Learn more about traffic routing.
A network security group contains several default security rules that allow or deny traffic to or from resources.
A network security group can be associated to a network interface, the subnet the network interface is in, or
both. To simplify management of security rules, it's recommended that you associate a network security group
to individual subnets, rather than individual network interfaces within the subnet, whenever possible.
If different VMs within a subnet need different security rules applied to them, you can associate the network
interface in the VM to one or more application security groups. A security rule can specify an application
security group in its source, destination, or both. That rule then only applies to the network interfaces that are
members of the application security group. Learn more about network security groups and application security
groups.
Azure creates several default security rules within each network security group. One default rule allows all traffic
to flow between all resources in a virtual network. To override this behavior, use network security groups,
custom routing to route traffic to an NVA, or both. It's recommended that you familiarize yourself with all of
Azure's default security rules and understand how network security group rules are applied to a resource.
You can view sample designs for implementing a DMZ between Azure and the internet using an NVA or network
security groups.
Traffic routing
Azure creates several default routes for outbound traffic from a subnet. You can override Azure's default routing by
creating a route table and associating it to a subnet. Common reasons for overriding Azure's default routing are:
Because you want traffic between subnets to flow through an NVA. Learn more about how to configure route
tables to force traffic through an NVA.
Because you want to force all internet-bound traffic through an NVA, or on-premises, through an Azure VPN
gateway. Forcing internet traffic on-premises for inspection and logging is often referred to as forced tunneling.
Learn more about how to configure forced tunneling.
If you need to implement custom routing, it's recommended that you familiarize yourself with routing in Azure.

Connectivity
You can connect a virtual network to other virtual networks using virtual network peering, or to your on-premises
network, using an Azure VPN gateway.
Peering
When using virtual network peering, the virtual networks can be in the same, or different, supported Azure
regions. The virtual networks can be in the same, or different Azure subscriptions, as long as both subscriptions are
assigned to the same Azure Active Directory tenant. Before creating a peering, it's recommended that you
familiarize yourself with all of the peering requirements and constraints. Bandwidth between resources in virtual
networks peered in the same region is the same as if the resources were in the same virtual network.
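A peering between two virtual networks can be sketched with the Azure CLI. Names are illustrative; note that newer CLI versions accept the remote network's name via --remote-vnet, while older versions require the full resource ID:

```shell
# Peerings are directional: create one in each direction so traffic can flow.
az network vnet peering create \
    --resource-group MyResourceGroup \
    --name hub-to-spoke \
    --vnet-name hub-vnet \
    --remote-vnet spoke-vnet \
    --allow-vnet-access

az network vnet peering create \
    --resource-group MyResourceGroup \
    --name spoke-to-hub \
    --vnet-name spoke-vnet \
    --remote-vnet hub-vnet \
    --allow-vnet-access
```

Remember that the two networks' address spaces must not overlap, as discussed in the Virtual networks section above.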
VPN gateway
You can use an Azure VPN Gateway to connect a virtual network to your on-premises network using a site-to-site
VPN, or using a dedicated connection with Azure ExpressRoute.
You can combine peering and a VPN gateway to create hub and spoke networks, where spoke virtual networks
connect to a hub virtual network, and the hub connects to an on-premises network, for example.
Name resolution
Resources in one virtual network cannot resolve the names of resources in a peered virtual network using Azure's
built-in DNS. To resolve names in a peered virtual network, deploy your own DNS server, or use Azure DNS
private domains. Resolving names between resources in a virtual network and on-premises networks also requires
you to deploy your own DNS server.

Permissions
Azure utilizes role-based access control (RBAC) for resources. Permissions are assigned to a scope in the following
hierarchy: subscription, management group, resource group, and individual resource. To learn more about the
hierarchy, see Organize your resources. To work with Azure virtual networks and all of their related capabilities
such as peering, network security groups, service endpoints, and route tables, you can assign members of your
organization to the built-in Owner, Contributor, or Network contributor roles, and then assign the role to the
appropriate scope. If you want to assign specific permissions for a subset of virtual network capabilities, create a
custom role and assign the specific permissions required for virtual networks, subnets and service endpoints,
network interfaces, peering, network and application security groups, or route tables to the role.

Policy
Azure Policy enables you to create, assign, and manage policy definitions. Policy definitions enforce different rules
over your resources, so the resources stay compliant with your organizational standards and service level
agreements. Azure Policy runs an evaluation of your resources, scanning for resources that are not compliant with
the policy definitions you have. For example, you can define and apply a policy that allows creation of virtual
networks in only a specific resource group or region. Another policy can require that every subnet has a network
security group associated to it. The policies are then evaluated when creating and updating resources.
Policies are applied to the following hierarchy: Subscription, management group, and resource group. Learn more
about Azure policy or deploy some virtual network policy template samples.

Next steps
Learn about all tasks, settings, and options for a virtual network, subnet and service endpoint, network interface,
peering, network and application security group, or route table.
Planning and design for VPN Gateway
8/15/2017 • 8 minutes to read • Edit Online

Planning and designing your cross-premises and VNet-to-VNet configurations can be either simple, or
complicated, depending on your networking needs. This article walks you through basic planning and design
considerations.

Planning
Cross-premises connectivity options
If you want to connect your on-premises sites securely to a virtual network, you have three different ways to do so:
Site-to-Site, Point-to-Site, and ExpressRoute. Compare the different cross-premises connections that are available.
The option you choose can depend on various considerations, such as:
What kind of throughput does your solution require?
Do you want to communicate over the public Internet via secure VPN, or over a private connection?
Do you have a public IP address available to use?
Are you planning to use a VPN device? If so, is it compatible?
Are you connecting just a few computers, or do you want a persistent connection for your site?
What type of VPN gateway is required for the solution you want to create?
Which gateway SKU should you use?
Planning table
The following table can help you decide the best connectivity option for your solution.

| | POINT-TO-SITE | SITE-TO-SITE | EXPRESSROUTE |
| --- | --- | --- | --- |
| Azure supported services | Cloud Services and Virtual Machines | Cloud Services and Virtual Machines | Services list |
| Typical bandwidths | Based on the gateway SKU | Typically < 1 Gbps aggregate | 50 Mbps, 100 Mbps, 200 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, 10 Gbps |
| Protocols supported | Secure Socket Tunneling Protocol (SSTP) and IPsec | IPsec | Direct connection over VLANs, NSP's VPN technologies (MPLS, VPLS, ...) |
| Routing | RouteBased (dynamic) | PolicyBased (static routing) and RouteBased (dynamic routing VPN) are supported | BGP |
| Connection resiliency | active-passive | active-passive or active-active | active-active |
| Typical use case | Prototyping, dev/test/lab scenarios for cloud services and virtual machines | Dev/test/lab scenarios and small-scale production workloads for cloud services and virtual machines | Access to all Azure services (validated list), enterprise-class and mission-critical workloads, backup, Big Data, Azure as a DR site |
| SLA | SLA | SLA | SLA |
| Pricing | Pricing | Pricing | Pricing |
| Technical documentation | VPN Gateway documentation | VPN Gateway documentation | ExpressRoute documentation |
| FAQ | VPN Gateway FAQ | VPN Gateway FAQ | ExpressRoute FAQ |

Gateway SKUs

| SKU | S2S/VNet-to-VNet tunnels | P2S connections | Aggregate throughput benchmark |
| --- | --- | --- | --- |
| VpnGw1 | Max. 30 | Max. 128* | 650 Mbps |
| VpnGw2 | Max. 30 | Max. 128* | 1 Gbps |
| VpnGw3 | Max. 30 | Max. 128* | 1.25 Gbps |
| Basic | Max. 10 | Max. 128 | 100 Mbps |

*Contact support if additional connections are needed.


The Aggregate Throughput Benchmark is based on measurements of multiple tunnels aggregated through a
single gateway. It is not a guaranteed throughput, due to Internet traffic conditions and your application
behaviors.
Pricing information can be found on the Pricing page.
SLA (Service Level Agreement) information can be found on the SLA page.
VpnGw1, VpnGw2, and VpnGw3 are supported for VPN gateways using the Resource Manager
deployment model only.
Workflow
The following list outlines the common workflow for cloud connectivity:
1. Design and plan your connectivity topology and list the address spaces for all networks you want to connect.
2. Create an Azure virtual network.
3. Create a VPN gateway for the virtual network.
4. Create and configure connections to on-premises networks or other virtual networks (as needed).
5. Create and configure a Point-to-Site connection for your Azure VPN gateway (as needed).
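Steps 2 and 3 of the workflow above can be sketched with the Azure CLI. All names, the address prefixes, and the SKU below are illustrative assumptions:

```shell
# Step 2: create a virtual network, including the required GatewaySubnet.
az network vnet create \
    --resource-group MyResourceGroup \
    --name MyVnet \
    --address-prefix 10.0.0.0/16 \
    --subnet-name GatewaySubnet \
    --subnet-prefix 10.0.255.0/27

# The gateway needs a public IP address for its VPN endpoint.
az network public-ip create \
    --resource-group MyResourceGroup \
    --name MyGatewayIp

# Step 3: create a route-based VPN gateway (provisioning can take a long time).
az network vnet-gateway create \
    --resource-group MyResourceGroup \
    --name MyVpnGateway \
    --vnet MyVnet \
    --public-ip-address MyGatewayIp \
    --gateway-type Vpn \
    --vpn-type RouteBased \
    --sku VpnGw1
```

Connections to on-premises networks (steps 4 and 5) are then configured against this gateway.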

Design
Connection topologies
Start by looking at the diagrams in the About VPN Gateway article. The article contains basic diagrams, the
deployment models for each topology, and the available deployment tools you can use to deploy your
configuration.
Design basics
The following sections discuss the VPN gateway basics.
Networking services limits
Scroll through the tables to view networking services limits. The limits listed may impact your design.
About subnets
When you are creating connections, you must consider your subnet ranges. You cannot have overlapping subnet
address ranges. An overlapping subnet occurs when one virtual network or on-premises location contains the same
address space as the other location. This means that the network engineers for your local on-premises networks
need to carve out a range for you to use for your Azure IP addressing space/subnets. You need address space that
is not being used on the local on-premises network.
Avoiding overlapping subnets is also important when you are working with VNet-to-VNet connections. If your
subnets overlap and an IP address exists in both the sending and destination VNets, VNet-to-VNet connections
fail. Azure can't route the data to the other VNet because the destination address is part of the sending VNet.
VPN gateways require a specific subnet called a gateway subnet. All gateway subnets must be named
GatewaySubnet to work properly. Don't give your gateway subnet a different name, and don't deploy
VMs or anything else to the gateway subnet. See Gateway Subnets.
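Adding the gateway subnet to an existing virtual network can be sketched with the Azure CLI; the resource names and address prefix are illustrative assumptions:

```shell
# The subnet must be named exactly "GatewaySubnet" for the VPN gateway to work.
az network vnet subnet create \
    --resource-group MyResourceGroup \
    --vnet-name MyVnet \
    --name GatewaySubnet \
    --address-prefix 10.0.255.0/27
```

A /27 or larger prefix leaves room for future gateway configurations such as ExpressRoute coexistence.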
About local network gateways
The local network gateway typically refers to your on-premises location. In the classic deployment model, the local
network gateway is referred to as a Local Network Site. When you configure a local network gateway, you give it a
name, specify the public IP address of the on-premises VPN device, and specify the address prefixes that are in the
on-premises location. Azure looks at the destination address prefixes for network traffic, consults the configuration
that you have specified for the local network gateway, and routes packets accordingly. You can modify the address
prefixes as needed. For more information, see Local network gateways.
About gateway types
Selecting the correct gateway type for your topology is critical. If you select the wrong type, your gateway won't
work properly. The gateway type specifies how the gateway itself connects and is a required configuration setting
for the Resource Manager deployment model.
The gateway types are:
Vpn
ExpressRoute
About connection types
Each configuration requires a specific connection type. The connection types are:
IPsec
Vnet2Vnet
ExpressRoute
VPNClient
About VPN types
Each configuration requires a specific VPN type. If you are combining two configurations, such as creating a Site-
to-Site connection and a Point-to-Site connection to the same VNet, you must use a VPN type that satisfies both
connection requirements.
PolicyBased: PolicyBased VPNs were previously called static routing gateways in the classic deployment
model. Policy-based VPNs encrypt and direct packets through IPsec tunnels based on the IPsec policies
configured with the combinations of address prefixes between your on-premises network and the Azure
VNet. The policy (or traffic selector) is usually defined as an access list in the VPN device configuration. The
value for a PolicyBased VPN type is PolicyBased. When using a PolicyBased VPN, keep in mind the
following limitations:
PolicyBased VPNs can only be used on the Basic gateway SKU. This VPN type is not compatible with
other gateway SKUs.
You can have only 1 tunnel when using a PolicyBased VPN.
You can only use PolicyBased VPNs for S2S connections, and only for certain configurations. Most VPN
Gateway configurations require a RouteBased VPN.
RouteBased: RouteBased VPNs were previously called dynamic routing gateways in the classic deployment
model. RouteBased VPNs use "routes" in the IP forwarding or routing table to direct packets into their
corresponding tunnel interfaces. The tunnel interfaces then encrypt or decrypt the packets in and out of the
tunnels. The policy (or traffic selector) for RouteBased VPNs is configured as any-to-any (or wildcards). The
value for a RouteBased VPN type is RouteBased.
The following tables show the VPN type as it maps to each connection configuration. Make sure the VPN type for
your gateway matches the configuration that you want to create.
VPN type - Resource Manager deployment model

                              ROUTEBASED    POLICYBASED

Site-to-Site                  Supported     Supported
VNet-to-VNet                  Supported     Not Supported
Multi-Site                    Supported     Not Supported
S2S and ExpressRoute coexist  Supported     Not Supported
Point-to-Site                 Supported     Not Supported
Classic to Resource Manager   Supported     Not Supported

VPN type - classic deployment model

                              DYNAMIC       STATIC

Site-to-Site                  Supported     Supported
VNet-to-VNet                  Supported     Not Supported
Multi-Site                    Supported     Not Supported
S2S and ExpressRoute coexist  Supported     Not Supported
Point-to-Site                 Supported     Not Supported
Classic to Resource Manager   Supported     Not Supported

VPN devices for Site-to-Site connections


To configure a Site-to-Site connection, regardless of deployment model, you need the following items:
A VPN device that is compatible with Azure VPN gateways
A public-facing IPv4 address that is not behind a NAT
You need to have experience configuring your VPN device, or have someone who can configure the device for you.
To download VPN device configuration scripts:
Depending on the VPN device that you have, you may be able to download a VPN device configuration script. For
more information, see Download VPN device configuration scripts.
See the following links for additional configuration information:
For information about compatible VPN devices, see VPN Devices.
Before configuring your VPN device, check for any Known device compatibility issues for the VPN device
that you want to use.
For links to device configuration settings, see Validated VPN Devices. The device configuration links are
provided on a best-effort basis. It's always best to check with your device manufacturer for the latest
configuration information. The list shows the versions we have tested. If your OS is not on that list, it is still
possible that the version is compatible. Check with your device manufacturer to verify that the OS version for
your VPN device is compatible.
For an overview of VPN device configuration, see Overview of 3rd party VPN device configurations.
For information about editing device configuration samples, see Editing samples.
For cryptographic requirements, see About cryptographic requirements and Azure VPN gateways.
For information about IPsec/IKE parameters, see About VPN devices and IPsec/IKE parameters for Site-to-
Site VPN gateway connections. This link shows information about IKE version, Diffie-Hellman Group,
Authentication method, encryption and hashing algorithms, SA lifetime, PFS, and DPD, in addition to other
parameter information that you need to complete your configuration.
For IPsec/IKE policy configuration steps, see Configure IPsec/IKE policy for S2S VPN or VNet-to-VNet
connections.
To connect multiple policy-based VPN devices, see Connect Azure VPN gateways to multiple on-premises
policy-based VPN devices using PowerShell.
Consider forced tunnel routing
For most configurations, you can configure forced tunneling. Forced tunneling lets you redirect or "force" all
Internet-bound traffic back to your on-premises location via a Site-to-Site VPN tunnel for inspection and auditing.
This is a critical security requirement for most enterprise IT policies.
Without forced tunneling, Internet-bound traffic from your VMs in Azure always traverses the Azure network
infrastructure directly out to the Internet, without the option for you to inspect or audit the traffic.
Unauthorized Internet access can potentially lead to information disclosure or other types of security breaches.
A forced tunneling connection can be configured in both deployment models and by using different tools. For
more information, see Configure forced tunneling.
Forced tunneling diagram
Next steps
See the VPN Gateway FAQ and About VPN Gateway articles for more information to help you with your design.
For more information about specific gateway settings, see About VPN Gateway Settings.
ExpressRoute workflows for circuit provisioning and
circuit states
6/27/2017 • 4 minutes to read • Edit Online

This page walks you through the service provisioning and routing configuration workflows at a high level.

The following figure and corresponding steps show the tasks you must follow in order to have an ExpressRoute
circuit provisioned end-to-end.
1. Use PowerShell to configure an ExpressRoute circuit. Follow the instructions in the Create ExpressRoute circuits
article for more details.
2. Order connectivity from the service provider. This process varies. Contact your connectivity provider for more
details about how to order connectivity.
3. Ensure that the circuit has been provisioned successfully by verifying the ExpressRoute circuit provisioning state
through PowerShell.
4. Configure routing domains. If your connectivity provider manages Layer 3 for you, they will configure
routing for your circuit. If your connectivity provider only offers Layer 2 services, you must configure
routing per guidelines described in the routing requirements and routing configuration pages.
Enable Azure private peering - You must enable this peering to connect to VMs / cloud services deployed
within virtual networks.
Enable Azure public peering - You must enable Azure public peering if you wish to connect to Azure
services hosted on public IP addresses. This is a requirement to access Azure resources if you have
chosen to enable default routing for Azure private peering.
Enable Microsoft peering - You must enable this to access Office 365 and Dynamics 365.
IMPORTANT
You must ensure that the proxy / edge you use to connect to Microsoft is separate from the one you use for
the Internet. Using the same edge for both ExpressRoute and the Internet causes asymmetric routing and
connectivity outages for your network.

5. Linking virtual networks to ExpressRoute circuits - You can link virtual networks to your ExpressRoute circuit.
Follow instructions to link VNets to your circuit. These VNets can either be in the same Azure subscription as
the ExpressRoute circuit, or can be in a different subscription.

ExpressRoute circuit provisioning states


Each ExpressRoute circuit has two states: Status and ServiceProviderProvisioningState.
Status represents Microsoft's provisioning state. This property is set to Enabled when you create an
ExpressRoute circuit.
ServiceProviderProvisioningState represents the state on the connectivity provider's side. It can be
NotProvisioned, Provisioning, or Provisioned. The ExpressRoute circuit must be in the Provisioned state for you
to be able to use it.
Possible states of an ExpressRoute circuit
This section lists out the possible states for an ExpressRoute circuit.
At creation time
You will see the ExpressRoute circuit in the following state as soon as you run the PowerShell cmdlet to create the
ExpressRoute circuit.
ServiceProviderProvisioningState : NotProvisioned
Status : Enabled

When connectivity provider is in the process of provisioning the circuit


You will see the ExpressRoute circuit in the following state as soon as you pass the service key to the connectivity
provider and they have started the provisioning process.

ServiceProviderProvisioningState : Provisioning
Status : Enabled

When connectivity provider has completed the provisioning process


You will see the ExpressRoute circuit in the following state as soon as the connectivity provider has completed the
provisioning process.

ServiceProviderProvisioningState : Provisioned
Status : Enabled

Provisioned and Enabled is the only state the circuit can be in for you to be able to use it. If you are using a Layer 2
provider, you can configure routing for your circuit only when it is in this state.
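The two-state check described above reduces to a simple rule; the following is a minimal sketch (a conceptual model, not an Azure SDK call):

```python
def circuit_is_usable(status, provider_state):
    """An ExpressRoute circuit is usable only when Microsoft has enabled it
    AND the connectivity provider has completed provisioning."""
    return status == "Enabled" and provider_state == "Provisioned"

print(circuit_is_usable("Enabled", "NotProvisioned"))  # False - just created
print(circuit_is_usable("Enabled", "Provisioning"))    # False - provider still working
print(circuit_is_usable("Enabled", "Provisioned"))     # True  - ready for routing config
```

In practice you would read these two properties from the circuit object returned by the PowerShell cmdlet and gate your Layer 2 routing configuration on this condition.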
When connectivity provider is deprovisioning the circuit
If you requested the service provider to deprovision the ExpressRoute circuit, you will see the circuit set to the
following state after the service provider has completed the deprovisioning process.

ServiceProviderProvisioningState : NotProvisioned
Status : Enabled

You can choose to re-enable it if needed, or run PowerShell cmdlets to delete the circuit.

IMPORTANT
If you run the PowerShell cmdlet to delete the circuit when the ServiceProviderProvisioningState is Provisioning or
Provisioned, the operation fails. Work with your connectivity provider to deprovision the ExpressRoute circuit first,
and then delete the circuit. Microsoft continues to bill the circuit until you run the PowerShell cmdlet to delete it.

Routing session configuration state


The BGP provisioning state lets you know if the BGP session has been enabled on the Microsoft edge. The state
must be enabled for you to be able to use the peering.
It is important to check the BGP session state, especially for Microsoft peering. In addition to the BGP
provisioning state, there is another state called the advertised public prefixes state. The advertised public
prefixes state must be Configured, both for the BGP session to be up and for your routing to work end-to-end.
If the advertised public prefixes state is set to Validation needed, the BGP session is not enabled, because the
advertised prefixes did not match the AS number in any of the routing registries.
IMPORTANT
If the advertised public prefixes state is in manual validation state, you must open a support ticket with Microsoft support
and provide evidence that you own the IP addresses advertised along with the associated Autonomous System number.

Next steps
Configure your ExpressRoute connection.
Create an ExpressRoute circuit
Configure routing
Link a VNet to an ExpressRoute circuit
What is Azure Virtual Network?
5/7/2018 • 4 minutes to read • Edit Online

Azure Virtual Network enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely
communicate with each other, the internet, and on-premises networks. Azure Virtual Network provides the
following key capabilities:

Isolation and segmentation


You can implement multiple virtual networks within each Azure subscription and Azure region. Each virtual
network is isolated from other virtual networks. For each virtual network you can:
Specify a custom private IP address space using public and private (RFC 1918) addresses. Azure assigns
resources in a virtual network a private IP address from the address space that you assign.
Segment the virtual network into one or more subnets and allocate a portion of the virtual network's address
space to each subnet.
Use Azure-provided name resolution, or specify your own DNS server, for use by resources in a virtual
network.

Communicate with the internet


All resources in a virtual network can communicate outbound to the internet, by default. You can communicate
inbound to a resource by assigning a public IP address to it. To learn more, see Public IP addresses.

Communicate between Azure resources


Azure resources communicate securely with each other in one of the following ways:
Through a virtual network: You can deploy VMs, and several other types of Azure resources to a virtual
network, such as Azure App Service Environments, the Azure Kubernetes Service (AKS), and Azure Virtual
Machine Scale Sets. To view a complete list of Azure resources that you can deploy into a virtual network, see
Virtual network service integration.
Through a virtual network service endpoint: Extend your virtual network private address space and the
identity of your virtual network to Azure service resources, such as Azure Storage accounts and Azure SQL
Databases, over a direct connection. Service endpoints allow you to secure your critical Azure service
resources to only a virtual network. To learn more, see Virtual network service endpoints overview.

Communicate with on-premises resources


You can connect your on-premises computers and networks to a virtual network using any combination of the
following options:
Point-to-site virtual private network (VPN ): Established between a virtual network and a single computer
in your network. Each computer that wants to establish connectivity with a virtual network must configure its
connection. This connection type is great if you're just getting started with Azure, or for developers, because it
requires little or no changes to your existing network. The communication between your computer and a
virtual network is sent through an encrypted tunnel over the internet. To learn more, see Point-to-site VPN.
Site-to-site VPN: Established between your on-premises VPN device and an Azure VPN Gateway that is
deployed in a virtual network. This connection type enables any on-premises resource that you authorize to
access a virtual network. The communication between your on-premises VPN device and an Azure VPN
gateway is sent through an encrypted tunnel over the internet. To learn more, see Site-to-site VPN.
Azure ExpressRoute: Established between your network and Azure, through an ExpressRoute partner. This
connection is private. Traffic does not go over the internet. To learn more, see ExpressRoute.

Filter network traffic


You can filter network traffic between subnets using either or both of the following options:
Network security groups: A network security group can contain multiple inbound and outbound security
rules that enable you to filter traffic to and from resources by source and destination IP address, port, and
protocol. To learn more, see Network security groups.
Network virtual appliances: A network virtual appliance is a VM that performs a network function, such as a
firewall, WAN optimization, or other network function. To view a list of available network virtual appliances
that you can deploy in a virtual network, see Azure Marketplace.
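Priority-ordered security rules behave as a first-match-wins evaluation. The following is a simplified sketch of that behavior (illustrative only; the real NSG engine also matches on source/destination address prefixes and direction):

```python
def evaluate_nsg(rules, packet):
    """Evaluate rules in priority order (lowest number first);
    the first matching rule decides. Unmatched traffic is denied."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        port_ok = rule["port"] in ("*", packet["port"])
        proto_ok = rule["protocol"] in ("*", packet["protocol"])
        if port_ok and proto_ok:
            return rule["action"]
    return "Deny"  # implicit deny when nothing matches

rules = [
    {"priority": 100, "port": 443, "protocol": "tcp", "action": "Allow"},
    {"priority": 200, "port": "*", "protocol": "*", "action": "Deny"},
]
print(evaluate_nsg(rules, {"port": 443, "protocol": "tcp"}))  # Allow
print(evaluate_nsg(rules, {"port": 22, "protocol": "tcp"}))   # Deny
```

The key design point this models: a lower priority number always wins, so a broad low-priority Deny rule acts as a backstop behind narrow Allow rules.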

Route network traffic


Azure routes traffic between subnets, connected virtual networks, on-premises networks, and the Internet, by
default. You can implement either or both of the following options to override the default routes Azure creates:
Route tables: You can create custom route tables with routes that control where traffic is routed to for each
subnet. Learn more about route tables.
Border gateway protocol (BGP ) routes: If you connect your virtual network to your on-premises network
using an Azure VPN Gateway or ExpressRoute connection, you can propagate your on-premises BGP routes
to your virtual networks. Learn more about using BGP with Azure VPN Gateway and ExpressRoute.
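Route selection over a route table works by longest-prefix match: the most specific route containing the destination wins. A minimal sketch with Python's ipaddress module (the route entries, including the forced-tunneling default route, are illustrative):

```python
import ipaddress

def next_hop(dest_ip, routes):
    """Pick the route whose prefix contains the destination,
    preferring the most specific (longest) prefix."""
    addr = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(prefix), hop)
               for prefix, hop in routes.items()
               if addr in ipaddress.ip_network(prefix)]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

routes = {
    "10.0.0.0/16": "VnetLocal",              # traffic stays inside the VNet
    "0.0.0.0/0":   "VirtualNetworkGateway",  # e.g. forced tunneling to on-premises
}
print(next_hop("10.0.5.9", routes))  # VnetLocal
print(next_hop("8.8.8.8", routes))   # VirtualNetworkGateway
```

This is also why a 0.0.0.0/0 route to a gateway implements forced tunneling: it matches all Internet-bound traffic while more specific VNet routes keep internal traffic local.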

Connect virtual networks


You can connect virtual networks to each other by using virtual network peering, enabling resources in either
virtual network to communicate with each other. The virtual networks you connect can be in the same, or
different, Azure regions. To learn more, see Virtual network peering.

Next steps
You now have an overview of Azure Virtual Network. To get started using a virtual network, create one, deploy a
few VMs to it, and communicate between the VMs. To learn how, see the Create a virtual network quickstart.
What is Azure Load Balancer?
6/1/2018 • 16 minutes to read • Edit Online

With Azure Load Balancer you can scale your applications and create high availability for your services. Load
Balancer supports inbound and outbound scenarios, provides low latency and high throughput, and scales up to
millions of flows for all TCP and UDP applications.
Load Balancer distributes new inbound flows that arrive on the load balancer's frontend to backend pool instances,
according to rules and health probes.
Additionally, a public load balancer can provide outbound connections for virtual machines (VMs) inside your
virtual network by translating their private IP addresses to public IP addresses.
Azure Load Balancer is available in two SKUs: Basic and Standard. There are differences in scale, features, and
pricing. Any scenario that's possible with Basic Load Balancer can also be created with Standard Load Balancer,
although the approaches might differ slightly. As you learn about Load Balancer, it is important to familiarize
yourself with the fundamentals and SKU-specific differences.

Why use Load Balancer?


You can use Azure Load Balancer to:
Load-balance incoming internet traffic to your VMs. This configuration is known as a public load balancer.
Load-balance traffic across VMs inside a virtual network. You can also reach a load balancer front end from an
on-premises network in a hybrid scenario. Both scenarios use a configuration that is known as an internal load
balancer.
Port forward traffic to a specific port on specific VMs with inbound network address translation (NAT) rules.
Provide outbound connectivity for VMs inside your virtual network by using a public load balancer.

NOTE
Azure provides a suite of fully managed load-balancing solutions for your scenarios. If you are looking for Transport Layer
Security (TLS) protocol termination ("SSL offload") or per-HTTP/HTTPS-request application-layer processing, review
Application Gateway. If you are looking for global DNS load balancing, review Traffic Manager. Your end-to-end scenarios
might benefit from combining these solutions as needed.

What are load balancer resources?


A load balancer resource can exist as either a public load balancer or an internal load balancer. The load balancer
resource's functions are expressed as a front end, a rule, a health probe, and a backend pool definition. You place
VMs into the backend pool by specifying the backend pool from the VM.
Load balancer resources are objects within which you can express how Azure should program its multi-tenant
infrastructure to achieve the scenario that you want to create. There is no direct relationship between load balancer
resources and actual infrastructure. Creating a load balancer doesn't create an instance, and capacity is always
available.

Fundamental Load Balancer features


Load Balancer provides the following fundamental capabilities for TCP and UDP applications:
Load balancing
With Azure Load Balancer, you can create a load-balancing rule to distribute traffic that arrives at frontend
to backend pool instances. Load Balancer uses a hash-based algorithm for distribution of inbound flows
and rewrites the headers of flows to backend pool instances accordingly. A server is available to receive new
flows when a health probe indicates a healthy backend endpoint.
By default, Load Balancer uses a 5-tuple hash composed of source IP address, source port, destination IP
address, destination port, and IP protocol number to map flows to available servers. You can choose to
create affinity to a specific source IP address by opting into a 2- or 3-tuple hash for a given rule. All packets
of the same packet flow arrive on the same instance behind the load-balanced front end. When the client
initiates a new flow from the same source IP, the source port changes. As a result, the 5-tuple might cause
the traffic to go to a different backend endpoint.
For more information, see Load balancer distribution mode. The following image displays the hash-based
distribution:

Figure: Hash-based distribution
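The hash-based behavior can be sketched as follows. Azure's internal hash function is not public; SHA-256 here is purely illustrative of the mapping property, not the actual algorithm:

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Map a 5-tuple onto a backend: the same flow always lands on the
    same instance, while a new source port may land elsewhere."""
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{protocol}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]

backends = ["vm-0", "vm-1", "vm-2"]
first = pick_backend("203.0.113.7", 50001, "40.1.2.3", 80, "tcp", backends)
again = pick_backend("203.0.113.7", 50001, "40.1.2.3", 80, "tcp", backends)
print(first == again)  # True: identical 5-tuples always map to the same VM
```

A 2- or 3-tuple affinity mode corresponds to hashing fewer fields (e.g. only source and destination IP), which is why it keeps a client on one backend across new connections.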


Port forwarding
With Load Balancer, you can create an inbound NAT rule to port forward traffic from a specific port of a
specific frontend IP address to a specific port of a specific backend instance inside the virtual network. This
is also accomplished by the same hash-based distribution as load balancing. Common scenarios for this
capability are Remote Desktop Protocol (RDP) or Secure Shell (SSH) sessions to individual VM instances
inside the Azure Virtual Network. You can map multiple internal endpoints to the various ports on the same
frontend IP address. You can use them to remotely administer your VMs over the internet without the need
for an additional jump box.
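Conceptually, inbound NAT rules form a lookup table from frontend (IP, port) pairs to backend (IP, port) pairs. A hypothetical sketch (all addresses and ports invented):

```python
# Hypothetical inbound NAT rules: one frontend IP, a distinct frontend
# port per backend endpoint - no jump box needed for remote management.
nat_rules = {
    ("40.1.2.3", 50001): ("10.0.0.4", 22),    # SSH to vm-0
    ("40.1.2.3", 50002): ("10.0.0.5", 22),    # SSH to vm-1
    ("40.1.2.3", 50003): ("10.0.0.4", 3389),  # RDP to vm-0
}

def forward(frontend_ip, frontend_port):
    """Translate a frontend (IP, port) to its backend target, if a rule exists."""
    return nat_rules.get((frontend_ip, frontend_port))

print(forward("40.1.2.3", 50002))  # ('10.0.0.5', 22)
print(forward("40.1.2.3", 50099))  # None - no rule configured
```

Because the frontend ports are distinct, many VMs can share one public IP address for administrative access.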
Application agnostic and transparent
Load Balancer does not directly interact with TCP or UDP or the application layer, and any TCP or UDP
application scenario can be supported. Load Balancer does not terminate or originate flows, does not interact
with the payload of the flow, and provides no application-layer gateway function; protocol handshakes always
occur directly between the client and the backend pool instance. A response to an inbound flow is always a
response from a virtual machine. When the flow arrives on the virtual machine, the original source IP
address is also preserved. A couple of examples to further illustrate transparency:
Every endpoint is only answered by a VM. For example, a TCP handshake always occurs between the
client and the selected backend VM. A response to a request to a front end is a response generated by
backend VM. When you successfully validate connectivity to a frontend, you are validating the end to
end connectivity to at least one backend virtual machine.
Application payloads are transparent to Load Balancer and any UDP or TCP application can be
supported. For workloads which require per HTTP request processing or manipulation of application
layer payloads (for example, parsing of HTTP URLs), you should use a layer 7 load balancer like
Application Gateway.
Because Load Balancer is agnostic to the TCP payload and TLS offload ("SSL") is not provided, you can
build end to end encrypted scenarios using Load Balancer and gain large scale out for TLS applications
by terminating the TLS connection on the VM itself. For example, your TLS session keying capacity is
only limited by the type and number of VMs you add to the backend pool. If you require "SSL
offloading", application layer treatment, or wish to delegate certificate management to Azure, you should
use Azure's layer 7 load balancer Application Gateway instead.
Automatic reconfiguration
Load Balancer instantly reconfigures itself when you scale instances up or down. Adding or removing VMs
from the backend pool reconfigures the load balancer without additional operations on the load balancer
resource.
Health probes
To determine the health of instances in the backend pool, Load Balancer uses health probes that you define.
When a probe fails to respond, the load balancer stops sending new connections to the unhealthy instances.
Existing connections are not affected, and they continue until the application terminates the flow, an idle
timeout occurs, or the VM is shut down.
Three types of probes are supported:
HTTP custom probe: You can use this probe to create your own custom logic to determine the
health of a backend pool instance. The load balancer regularly probes your endpoint (every 15
seconds, by default). The instance is considered to be healthy if it responds with an HTTP 200 within
the timeout period (default of 31 seconds). Any status other than HTTP 200 causes this probe to fail.
This probe is also useful for implementing your own logic to remove instances from the load
balancer's rotation. For example, you can configure the instance to return a non-200 status if the
instance's CPU utilization is greater than 90 percent. This probe overrides the default guest agent probe.
TCP custom probe: This probe relies on establishing a successful TCP session to a defined probe
port. As long as the specified listener on the VM exists, this probe succeeds. If the connection is
refused, the probe fails. This probe overrides the default guest agent probe.
Guest agent probe: The load balancer can also utilize the guest agent inside the VM. The guest
agent listens and responds with an HTTP 200 OK response only when the instance is in the ready
state. If the agent fails to respond with an HTTP 200 OK, the load balancer marks the instance as
unresponsive and stops sending traffic to that instance. The load balancer continues to attempt to
reach the instance. If the guest agent responds with an HTTP 200, the load balancer sends traffic to
that instance again. Guest agent probes are a last resort and not recommended when HTTP or TCP
custom probe configurations are possible.
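The HTTP probe semantics above can be sketched as follows. This is a simplified model; the CPU-based logic is an example of app-side code you would write yourself, with the 90 percent threshold taken from the example above:

```python
def probe_is_healthy(status_code, responded_in_time):
    """HTTP custom probe: healthy only on an HTTP 200 received within
    the timeout; any other status or a timeout marks the instance down."""
    return responded_in_time and status_code == 200

def app_health_status(cpu_percent):
    """Example app-side logic: return a non-200 status above 90% CPU
    to take this instance out of the load balancer's rotation."""
    return 200 if cpu_percent <= 90 else 503

print(probe_is_healthy(app_health_status(45), True))  # True  - stays in rotation
print(probe_is_healthy(app_health_status(95), True))  # False - removed from rotation
print(probe_is_healthy(200, False))                   # False - probe timed out
```

Note that an unhealthy probe only stops new flows; existing connections continue until the application ends the flow, an idle timeout occurs, or the VM shuts down.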
Outbound connections (SNAT)
All outbound flows from private IP addresses inside your virtual network to public IP addresses on the
internet can be translated to a frontend IP address of the load balancer. When a public front end is tied to a
backend VM by way of a load balancing rule, Azure programs outbound connections to be automatically
translated to the public frontend IP address.
This enables easy upgrade and disaster recovery of services, because the front end can be dynamically
mapped to another instance of the service. It also simplifies access control list (ACL) management: ACLs
expressed in terms of frontend IPs do not change as services scale up or down or get redeployed, and
translating outbound connections to a smaller number of IP addresses than machines reduces the burden
of whitelisting.
For more information, see outbound connections.
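A simplified sketch of the SNAT rewrite described above (the port pool and addresses are invented; real SNAT port allocation is considerably more involved):

```python
def snat_translate(src, frontend_ip, port_pool, allocations):
    """Rewrite a private (IP, port) source to the public frontend IP,
    reusing the allocated public port for an existing flow."""
    if src not in allocations:
        allocations[src] = port_pool.pop()  # take a port from the preallocated pool
    return (frontend_ip, allocations[src])

pool = [1024, 1025, 1026, 1027]
allocations = {}
a = snat_translate(("10.0.0.4", 50000), "40.1.2.3", pool, allocations)
b = snat_translate(("10.0.0.4", 50000), "40.1.2.3", pool, allocations)
print(a == b)  # True: the same flow keeps its SNAT port
print(a[0])    # the public frontend IP is what the internet destination sees
```

The finite port pool is also the intuition behind SNAT exhaustion: each distinct outbound flow consumes a port on the frontend IP, and the pool is shared across VMs behind that frontend.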
Standard Load Balancer has additional SKU -specific capabilities beyond these fundamentals. Review the
remainder of this article for details.

Load Balancer SKU comparison


Load Balancer supports both Basic and Standard SKUs, each differing in scenario scale, features, and pricing. Any
scenario that's possible with Basic Load Balancer can be created with Standard Load Balancer as well. In fact, the
APIs for both SKUs are similar and invoked through the specification of a SKU. The API for supporting SKUs for
Load Balancer and the public IP is available starting with the 2017-08-01 API. Both SKUs have the same general
API and structure.
However, depending on which SKU you choose, the complete scenario configuration might differ slightly. Load
Balancer documentation calls out when an article applies only to a specific SKU. To compare and understand the
differences, see the following table. For more information, see Standard Load Balancer overview.

NOTE
If you are using a newer design scenario, consider using Standard Load Balancer.

Standalone VMs, availability sets, and virtual machine scale sets can be connected to only one SKU, never both.
When you use them with public IP addresses, both Load Balancer and the public IP address SKU must match.
Load Balancer and public IP SKUs are not mutable.
It is a best practice to specify the SKUs explicitly, even though it is not yet mandatory. At this time, required
changes are being kept to a minimum. If a SKU is not specified, it is interpreted as an intention to use the 2017-
08-01 API version of the Basic SKU.

IMPORTANT
Standard Load Balancer is a new Load Balancer product and largely a superset of Basic Load Balancer. There are important
and deliberate differences between the two products. Any end-to-end scenario that's possible with Basic Load Balancer can
also be created with Standard Load Balancer. If you're already used to Basic Load Balancer, you should familiarize yourself
with Standard Load Balancer to understand the latest changes in behavior between Standard and Basic and their impact.
Review this section carefully.

Backend pool size
Standard: Up to 1,000 instances.
Basic: Up to 100 instances.

Backend pool endpoints
Standard: Any VM in a single virtual network, including a blend of VMs, availability sets, and virtual machine
scale sets.
Basic: VMs in a single availability set or virtual machine scale set.

Azure Availability Zones
Standard: Zone-redundant and zonal front ends for inbound and outbound, outbound flow mappings survive
zone failure, cross-zone load balancing.
Basic: Not available.

Diagnostics
Standard: Azure Monitor, multi-dimensional metrics including byte and packet counters, health probe status,
connection attempts (TCP SYN), outbound connection health (SNAT successful and failed flows), active data
plane measurements.
Basic: Azure Log Analytics for public load balancer only, SNAT exhaustion alert, backend pool health count.

HA Ports
Standard: Internal load balancer.
Basic: Not available.

Secure by default
Standard: By default, closed for public IP and load balancer endpoints. For traffic to flow, a network security
group must be used to explicitly whitelist entities.
Basic: Default open, network security group optional.

Outbound connections
Standard: Multiple front ends with per-rule opt-out. An outbound scenario must be explicitly created for the
VM to be able to use outbound connectivity. Virtual network service endpoints can be reached without
outbound connectivity and do not count toward data processed. Any public IP addresses, including Azure PaaS
services that are unavailable as virtual network service endpoints, must be reached via outbound connectivity
and count toward data processed. When only an internal load balancer is serving a VM, outbound connections
via default SNAT are unavailable. Outbound SNAT programming is transport-protocol specific, based on the
protocol of the inbound load-balancing rule.
Basic: Single front end, selected at random when multiple front ends are present. When only an internal load
balancer is serving a VM, the default SNAT is used.

Multiple front ends
Standard: Inbound and outbound.
Basic: Inbound only.

Management operations
Standard: Most operations < 30 seconds.
Basic: 60-90+ seconds typical.

SLA
Standard: 99.99 percent for a data path with two healthy VMs.
Basic: Implicit in the VM SLA.

Pricing
Standard: Charges are based on the number of rules and data processed inbound or outbound that are
associated with the resource.
Basic: No charge.
For more information, see service limits for Load Balancer. For Standard Load Balancer details, see overview,
pricing, and SLA.

Concepts
Public load balancer
A public load balancer maps the public IP address and port number of incoming traffic to the private IP address
and port number of the VM, and vice versa for the response traffic from the VM. By applying load-balancing rules,
you can distribute specific types of traffic across multiple VMs or services. For example, you can spread the load of
web request traffic across multiple web servers.
The following figure shows a load-balanced endpoint for web traffic that is shared among three VMs for the public
and private TCP port 80. These three VMs are in a load-balanced set.

Figure: Load balancing web traffic by using a public load balancer


When internet clients send webpage requests to the public IP address of a web app on TCP port 80, Azure Load
Balancer distributes the requests across the three VMs in the load-balanced set. For more information about load
balancer algorithms, see the load balancer features section of this article.
By default, Azure Load Balancer distributes network traffic equally among multiple VM instances. You can also
configure session affinity. For more information, see load balancer distribution mode.
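The default distribution behavior can be sketched as a hash over the connection's five-tuple, with affinity modes hashing fewer fields (a conceptual model only, not Azure's actual implementation; the backend names are invented):

```python
import hashlib

backends = ["web-vm-1", "web-vm-2", "web-vm-3"]  # hypothetical backend pool

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto, affinity=None):
    # Default mode: hash the full 5-tuple. Source-IP affinity hashes only
    # the addresses, so every flow from one client lands on the same VM.
    if affinity == "source_ip":
        key = (src_ip, dst_ip)
    else:
        key = (src_ip, src_port, dst_ip, dst_port, proto)
    digest = hashlib.sha256(repr(key).encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]
```

With the default mode, a new source port can land on a different VM; with source-IP affinity, all connections from a client map to one VM regardless of port.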
Internal load balancer
An internal load balancer directs traffic only to resources that are inside a virtual network or that use a VPN to
access Azure infrastructure. In this respect, an internal load balancer differs from a public load balancer. Azure
infrastructure restricts access to the load-balanced frontend IP addresses of a virtual network. Frontend IP
addresses and virtual networks are never directly exposed to an internet endpoint. Internal line-of-business
applications run in Azure and are accessed from within Azure or from on-premises resources.
An internal load balancer enables the following types of load balancing:
Within a virtual network: Load balancing from VMs in the virtual network to a set of VMs that reside within
the same virtual network.
For a cross-premises virtual network: Load balancing from on-premises computers to a set of VMs that
reside within the same virtual network.
For multi-tier applications: Load balancing for internet-facing multi-tier applications where the backend tiers
are not internet-facing. The backend tiers require traffic load-balancing from the internet-facing tier (see the
next figure).
For line-of-business applications: Load balancing for line-of-business applications that are hosted in Azure
without additional load balancer hardware or software. This scenario includes on-premises servers that are in
the set of computers whose traffic is load-balanced.

Figure: Load balancing multi-tier applications by using both public and internal load balancers

Pricing
Standard Load Balancer usage is charged based on the number of configured load-balancing rules and the
amount of processed inbound and outbound data. For Standard Load Balancer pricing information, go to the Load
Balancer pricing page.
Basic Load Balancer is offered at no charge.

SLA
For information about the Standard Load Balancer SLA, go to the Load Balancer SLA page.

Limitations
Load Balancer is a TCP or UDP product for load balancing and port forwarding for these specific IP protocols.
Load-balancing rules and inbound NAT rules are supported for TCP and UDP, but not for other IP protocols,
including ICMP. Load Balancer does not terminate, respond to, or otherwise interact with the payload of a UDP
or TCP flow; it is not a proxy. Successful validation of connectivity to a frontend must take place in-band with
the same protocol used in a load-balancing or inbound NAT rule (TCP or UDP), and at least one of your virtual
machines must generate a response for a client to see a response from a frontend. Not receiving an in-band
response from the Load Balancer frontend indicates that no virtual machines were able to respond. It is not
possible to interact with a Load Balancer frontend without a virtual machine able to respond. This also applies
to outbound connections, where port masquerade SNAT is supported only for TCP and UDP; any other IP
protocol, including ICMP, will also fail. Assign an instance-level public IP address to mitigate.
Unlike public Load Balancers which provide outbound connections when transitioning from private IP
addresses inside the virtual network to public IP addresses, internal Load Balancers do not translate outbound
originated connections to the frontend of an internal Load Balancer as both are in private IP address space. This
avoids potential for SNAT exhaustion inside unique internal IP address space where translation is not required.
The side effect is that if an outbound flow from a VM in the backend pool attempts a flow to frontend of the
internal Load Balancer in which pool it resides and is mapped back to itself, both legs of the flow don't match
and the flow will fail. If the flow did not map back to the same VM in the backend pool which created the flow
to the frontend, the flow will succeed. When the flow maps back to itself the outbound flow appears to originate
from the VM to the frontend and the corresponding inbound flow appears to originate from the VM to itself.
From the guest OS's point of view, the inbound and outbound parts of the same flow don't match inside the
virtual machine. The TCP stack will not recognize these halves of the same flow as being part of the same flow
as the source and destination don't match. When the flow maps to any other VM in the backend pool, the
halves of the flow will match and the VM can successfully respond to the flow. The symptom for this scenario is
intermittent connection timeouts. There are several common workarounds for reliably achieving this scenario
(originating flows from a backend pool to the backend pool's respective internal Load Balancer frontend), which
include either insertion of a third-party proxy behind the internal Load Balancer or using DSR-style rules. While
you could use a public Load Balancer to mitigate, the resulting scenario is prone to SNAT exhaustion and
should be avoided unless carefully managed.
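The hairpin behavior described above can be illustrated with a toy model (hypothetical addresses; the hash stands in for the load balancer's internal flow-to-backend mapping):

```python
import hashlib

def lb_pick(src_vm, frontend, pool):
    # Stand-in for the load balancer's mapping of a flow to a backend VM.
    digest = hashlib.sha256(f"{src_vm}->{frontend}".encode()).hexdigest()
    return pool[int(digest, 16) % len(pool)]

def hairpin_flow_succeeds(src_vm, frontend, pool):
    # An outbound flow from a backend VM to its own internal frontend fails
    # only when the mapping selects the originating VM itself: the inbound
    # and outbound halves of the flow then don't match inside the guest OS.
    return lb_pick(src_vm, frontend, pool) != src_vm
```

With a single-VM backend pool the flow always maps back to the originator and therefore always fails; with more VMs it fails only some of the time, which matches the intermittent-timeout symptom.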

Next steps
You now have an overview of Azure Load Balancer. To get started with using a load balancer, create one, create
VMs with a custom IIS extension installed, and load-balance the web app between the VMs. To learn how, see the
Create a Basic Load Balancer quickstart.
What is Azure Application Gateway?
4/25/2018 • 4 minutes to read • Edit Online

Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web
applications.
Traditional load balancers operate at the transport layer (OSI layer 4 - TCP and UDP) and route traffic based on
source IP address and port to a destination IP address and port. With Application Gateway, you can be
even more specific. For example, you can route traffic based on the incoming URL. So if /images is in the
incoming URL, you can route traffic to a specific set of servers (known as a pool) configured for images. If /video
is in the URL, that traffic is routed to another pool optimized for videos.

This type of routing is known as application layer (OSI layer 7) load balancing. Azure Application Gateway can do
URL-based routing and more. The following features are included with Azure Application Gateway:

URL-based routing
URL path-based routing allows you to route traffic to back-end server pools based on the URL path of the request.
One scenario is to route requests for different content types to different pools.
For example, requests for http://contoso.com/video/* are routed to VideoServerPool, and requests for
http://contoso.com/images/* are routed to ImageServerPool. DefaultServerPool is selected if none of the path
patterns match.
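This selection logic can be sketched as a first-match scan over path patterns (pool names come from the example above; the matching code is illustrative, not Application Gateway's implementation):

```python
from fnmatch import fnmatch

# Hypothetical path map mirroring the example in the text.
path_map = {
    "/video/*": "VideoServerPool",
    "/images/*": "ImageServerPool",
}

def route(path, default_pool="DefaultServerPool"):
    # Return the pool for the first matching pattern, else the default pool.
    for pattern, pool in path_map.items():
        if fnmatch(path, pattern):
            return pool
    return default_pool
```

For example, `route("/about")` falls through every pattern and lands on DefaultServerPool.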

Redirection
A common scenario for many web applications is to support automatic HTTP to HTTPS redirection to ensure all
communication between an application and its users occurs over an encrypted path.
In the past, you may have used techniques such as creating a dedicated pool whose sole purpose is to redirect
requests it receives on HTTP to HTTPS. Application Gateway supports the ability to redirect traffic on the
gateway itself. This simplifies application configuration, optimizes resource usage, and supports new
redirection scenarios, including global and path-based redirection. Application Gateway redirection support is not
limited to HTTP-to-HTTPS redirection alone. It is a generic redirection mechanism, so you can redirect from
and to any port you define using rules, and you can also redirect to an external site.
Application Gateway redirection support offers the following capabilities:
Global redirection from one port to another port on the Gateway. This enables HTTP to HTTPS redirection on
a site.
Path-based redirection. This type of redirection enables HTTP to HTTPS redirection only on a specific site area,
for example a shopping cart area denoted by /cart/* .
Redirect to an external site.
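The generic port-and-scheme redirection can be sketched by computing the target URL a rule would produce (the rule shape and field names here are invented for illustration, not Application Gateway's configuration schema):

```python
def redirect_target(host, path, rule):
    # rule is a hypothetical dict, e.g. {"scheme": "https", "port": 443}.
    # preserve_path keeps the original request path, as in path-based
    # HTTP-to-HTTPS redirection for an area such as /cart/*.
    target_path = path if rule.get("preserve_path", True) else "/"
    port = rule.get("port", 443)
    # Omit the port for the scheme defaults 80/443.
    netloc = host if port in (80, 443) else f"{host}:{port}"
    return f"{rule['scheme']}://{netloc}{target_path}"
```

A global HTTP-to-HTTPS rule would send a request for http://contoso.com/cart/item1 to https://contoso.com/cart/item1, while a rule targeting a custom port keeps the port in the URL.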

Multiple-site hosting
Multiple-site hosting enables you to configure more than one web site on the same application gateway instance.
This feature allows you to configure a more efficient topology for your deployments by adding up to 20 web sites
to one application gateway. Each web site can be directed to its own pool. For example, application gateway can
serve traffic for contoso.com and fabrikam.com from two server pools called ContosoServerPool and
FabrikamServerPool.
Requests for http://contoso.com are routed to ContosoServerPool, and requests for http://fabrikam.com are routed
to FabrikamServerPool.
Similarly, two subdomains of the same parent domain can be hosted on the same application gateway
deployment. Examples of using subdomains could include http://blog.contoso.com and http://app.contoso.com
hosted on a single application gateway deployment.
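Selecting a pool by host header can be sketched as a simple lookup (a simplified illustration using the pools from the example; real listeners also consider ports and protocols):

```python
# Hypothetical site map mirroring the example in the text.
site_map = {
    "contoso.com": "ContosoServerPool",
    "fabrikam.com": "FabrikamServerPool",
}

def pool_for(host_header):
    # Strip an optional port and normalize case before the lookup;
    # returns None when no site matches.
    host = host_header.split(":")[0].lower()
    return site_map.get(host)
```

Subdomains such as blog.contoso.com and app.contoso.com would simply be additional keys in the map, each pointing at its own pool.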

Session affinity
The cookie-based session affinity feature is useful when you want to keep a user session on the same server. By
using gateway-managed cookies, the Application Gateway can direct subsequent traffic from a user session to the
same server for processing. This is important in cases where session state is saved locally on the server for a user
session.
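A rough model of gateway-managed cookie affinity (the cookie name and token scheme below are invented; they are not Application Gateway's actual cookie format):

```python
import hashlib

AFFINITY_COOKIE = "GatewayAffinity"  # hypothetical cookie name

def _token(backend):
    # Opaque token identifying a backend, stored in the affinity cookie.
    return hashlib.sha256(backend.encode()).hexdigest()

def choose_backend(cookies, backends):
    # Honor an existing affinity cookie; otherwise pick a backend and
    # return the cookie to set so subsequent requests stick to it.
    by_token = {_token(b): b for b in backends}
    token = cookies.get(AFFINITY_COOKIE)
    if token in by_token:
        return by_token[token], {}
    backend = backends[0]  # simplistic choice for a new session
    return backend, {AFFINITY_COOKIE: _token(backend)}
```

Once the cookie is set, later requests return to the same server even if the pool is reordered, which is what keeps locally stored session state reachable.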

Secure Sockets Layer (SSL) termination


Application Gateway supports SSL termination at the gateway, after which traffic typically flows unencrypted to
the backend servers. This feature allows web servers to be unburdened from costly encryption and decryption
overhead. However, sometimes unencrypted communication to the servers is not an acceptable option, whether
because of security requirements, compliance requirements, or because the application can accept only a secure
connection. For such applications, Application Gateway supports end-to-end SSL encryption.

Web application firewall


Web application firewall (WAF ) is a feature of Application Gateway that provides centralized protection of your
web applications from common exploits and vulnerabilities. WAF is based on rules from the OWASP (Open Web
Application Security Project) core rule sets 3.0 or 2.2.9.
Web applications are increasingly targets of malicious attacks that exploit commonly known vulnerabilities. SQL
injection and cross-site scripting attacks are among the most common. Preventing
such attacks in application code can be challenging and may require rigorous maintenance, patching and
monitoring at many layers of the application topology. A centralized web application firewall helps make security
management much simpler and gives better assurance to application administrators against threats or intrusions.
A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location
versus securing each of individual web applications. Existing application gateways can be converted to a web
application firewall enabled application gateway easily.

Websocket and HTTP/2 traffic


Application Gateway provides native support for the WebSocket and HTTP/2 protocols. There's no user-
configurable setting to selectively enable or disable WebSocket support. HTTP/2 support can be enabled using
Azure PowerShell.
The WebSocket and HTTP/2 protocols enable full duplex communication between a server and a client over a
long running TCP connection. This allows for a more interactive communication between the web server and the
client, which can be bidirectional without the need for polling as required in HTTP-based implementations. These
protocols have low overhead, unlike HTTP, and can reuse the same TCP connection for multiple
request/responses resulting in a more efficient utilization of resources. These protocols are designed to work over
traditional HTTP ports of 80 and 443.

Next steps
Depending on your requirements and environment, you can create a test Application Gateway using either the
Azure portal, Azure PowerShell, or Azure CLI:
Quickstart: Direct web traffic with Azure Application Gateway - Azure portal.
Quickstart: Direct web traffic with Azure Application Gateway - Azure PowerShell
Quickstart: Direct web traffic with Azure Application Gateway - Azure CLI
What is Azure DNS?
6/13/2018 • 2 minutes to read • Edit Online

Azure DNS is a hosting service for DNS domains, providing name resolution using Microsoft Azure infrastructure.
By hosting your domains in Azure, you can manage your DNS records using the same credentials, APIs, tools, and
billing as your other Azure services.
You can't use Azure DNS to buy a domain name. For an annual fee, you can buy a domain name using Azure Web
Apps or a third-party domain name registrar. Your domains can then be hosted in Azure DNS for record
management. See Delegate a Domain to Azure DNS for details.
The following features are included with Azure DNS:

Reliability and performance


DNS domains in Azure DNS are hosted on Azure's global network of DNS name servers. Azure DNS uses anycast
networking so that each DNS query is answered by the closest available DNS server. This provides both fast
performance and high availability for your domain.

Security
The Azure DNS service is based on Azure Resource Manager. So, you get Resource Manager features such as:
role-based access control - to control who has access to specific actions for your organization.
activity logs - to monitor how a user in your organization modified a resource or to find an error when
troubleshooting.
resource locking - to lock a subscription, resource group, or resource to prevent other users in your
organization from accidentally deleting or modifying critical resources.
For more information, see How to protect DNS zones and records.

Ease of use
The Azure DNS service can manage DNS records for your Azure services, and can provide DNS for your external
resources as well. Azure DNS is integrated in the Azure portal and uses the same credentials, support contract, and
billing as your other Azure services.
DNS billing is based on the number of DNS zones hosted in Azure and by the number of DNS queries. To learn
more about pricing, see Azure DNS Pricing.
Your domains and records can be managed using the Azure portal, Azure PowerShell cmdlets, and the cross-
platform Azure CLI. Applications requiring automated DNS management can integrate with the service using the
REST API and SDKs.

Customizable virtual networks with private domains


Azure DNS also supports private DNS domains, a feature that is now in public preview. It allows you to use your
own custom domain names in your private virtual networks rather than the Azure-provided names available today.
For more information, see Using Azure DNS for private domains.
Next steps
Learn about DNS zones and records: DNS zones and records overview.
Learn how to create a zone in Azure DNS: Create a DNS zone.
For frequently asked questions about Azure DNS, see the Azure DNS FAQ.
Overview of Traffic Manager
7/20/2017 • 5 minutes to read • Edit Online

Microsoft Azure Traffic Manager allows you to control the distribution of user traffic for service endpoints in
different datacenters. Service endpoints supported by Traffic Manager include Azure VMs, Web Apps, and cloud
services. You can also use Traffic Manager with external, non-Azure endpoints.
Traffic Manager uses the Domain Name System (DNS ) to direct client requests to the most appropriate endpoint
based on a traffic-routing method and the health of the endpoints. Traffic Manager provides a range of traffic-
routing methods and endpoint monitoring options to suit different application needs and automatic failover
models. Traffic Manager is resilient to failure, including the failure of an entire Azure region.

Traffic Manager benefits


Traffic Manager can help you:
Improve availability of critical applications
Traffic Manager delivers high availability for your applications by monitoring your endpoints and providing
automatic failover when an endpoint goes down.
Improve responsiveness for high-performance applications
Azure allows you to run cloud services or websites in datacenters located around the world. Traffic
Manager improves application responsiveness by directing traffic to the endpoint with the lowest network
latency for the client.
Perform service maintenance without downtime
You can perform planned maintenance operations on your applications without downtime. Traffic Manager
directs traffic to alternative endpoints while the maintenance is in progress.
Combine on-premises and Cloud-based applications
Traffic Manager supports external, non-Azure endpoints enabling it to be used with hybrid cloud and on-
premises deployments, including the "burst-to-cloud," "migrate-to-cloud," and "failover-to-cloud"
scenarios.
Distribute traffic for large, complex deployments
Using nested Traffic Manager profiles, traffic-routing methods can be combined to create sophisticated and
flexible rules to support the needs of larger, more complex deployments.

How Traffic Manager works


Azure Traffic Manager enables you to control the distribution of traffic across your application endpoints. An
endpoint is any Internet-facing service hosted inside or outside of Azure.
Traffic Manager provides two key benefits:
1. Distribution of traffic according to one of several traffic-routing methods
2. Continuous monitoring of endpoint health and automatic failover when endpoints fail
When a client attempts to connect to a service, it must first resolve the DNS name of the service to an IP address.
The client then connects to that IP address to access the service.
The most important point to understand is that Traffic Manager works at the DNS level. Traffic Manager
uses DNS to direct clients to specific service endpoints based on the rules of the traffic-routing method. Clients
connect to the selected endpoint directly. Traffic Manager is not a proxy or a gateway. Traffic Manager does not
see the traffic passing between the client and the service.
Traffic Manager example
Contoso Corp has developed a new partner portal. The URL for this portal is
https://partners.contoso.com/login.aspx. The application is hosted in three regions of Azure. To improve
availability and maximize global performance, they use Traffic Manager to distribute client traffic to the closest
available endpoint.
To achieve this configuration, they complete the following steps:
1. Deploy three instances of their service. The DNS names of these deployments are 'contoso-us.cloudapp.net',
'contoso-eu.cloudapp.net', and 'contoso-asia.cloudapp.net'.
2. Create a Traffic Manager profile, named 'contoso.trafficmanager.net', and configure it to use the 'Performance'
traffic-routing method across the three endpoints.
3. Configure their vanity domain name, 'partners.contoso.com', to point to 'contoso.trafficmanager.net', using a
DNS CNAME record.

NOTE
When using a vanity domain with Azure Traffic Manager, you must use a CNAME to point your vanity domain name to your
Traffic Manager domain name. DNS standards do not allow you to create a CNAME at the 'apex' (or root) of a domain. Thus
you cannot create a CNAME for 'contoso.com' (sometimes called a 'naked' domain). You can only create a CNAME for a
domain under 'contoso.com', such as 'www.contoso.com'. To work around this limitation, we recommend using a simple
HTTP redirect to direct requests for 'contoso.com' to an alternative name such as 'www.contoso.com'.

How clients connect using Traffic Manager


Continuing from the previous example, when a client requests the page https://partners.contoso.com/login.aspx,
the client performs the following steps to resolve the DNS name and establish a connection:
1. The client sends a DNS query to its configured recursive DNS service to resolve the name
'partners.contoso.com'. A recursive DNS service, sometimes called a 'local DNS' service, does not host DNS
domains directly. Rather, the client off-loads the work of contacting the various authoritative DNS services
across the Internet needed to resolve a DNS name.
2. To resolve the DNS name, the recursive DNS service finds the name servers for the 'contoso.com' domain. It
then contacts those name servers to request the 'partners.contoso.com' DNS record. The contoso.com DNS
servers return the CNAME record that points to contoso.trafficmanager.net.
3. Next, the recursive DNS service finds the name servers for the 'trafficmanager.net' domain, which are
provided by the Azure Traffic Manager service. It then sends a request for the 'contoso.trafficmanager.net'
DNS record to those DNS servers.
4. The Traffic Manager name servers receive the request. They choose an endpoint based on:
The configured state of each endpoint (disabled endpoints are not returned)
The current health of each endpoint, as determined by the Traffic Manager health checks. For more
information, see Traffic Manager Endpoint Monitoring.
The chosen traffic-routing method. For more information, see Traffic Manager Routing Methods.
5. The chosen endpoint is returned as another DNS CNAME record. In this case, let us suppose contoso-
us.cloudapp.net is returned.
6. Next, the recursive DNS service finds the name servers for the 'cloudapp.net' domain. It contacts those name
servers to request the 'contoso-us.cloudapp.net' DNS record. A DNS 'A' record containing the IP address of
the US-based service endpoint is returned.
7. The recursive DNS service consolidates the results and returns a single DNS response to the client.
8. The client receives the DNS results and connects to the given IP address. The client connects to the application
service endpoint directly, not through Traffic Manager. Since it is an HTTPS endpoint, the client performs the
necessary SSL/TLS handshake, and then makes an HTTP GET request for the '/login.aspx' page.
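Steps 1-8 amount to following a CNAME chain until an A record is reached, which can be modeled directly (the records and the IP address below are illustrative, mirroring the example):

```python
# Simplified zone data for the example; the IP address is hypothetical.
records = {
    "partners.contoso.com": ("CNAME", "contoso.trafficmanager.net"),
    # Traffic Manager's answer for the chosen endpoint:
    "contoso.trafficmanager.net": ("CNAME", "contoso-us.cloudapp.net"),
    "contoso-us.cloudapp.net": ("A", "40.112.0.1"),
}

def resolve(name, depth=0):
    # Follow CNAME records, as a recursive DNS service does,
    # until an A record yields the IP address the client connects to.
    assert depth < 10, "CNAME loop"
    rtype, value = records[name]
    return value if rtype == "A" else resolve(value, depth + 1)
```

The client then connects to the returned address directly; Traffic Manager's only role was supplying the middle CNAME in the chain.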
The recursive DNS service caches the DNS responses it receives. The DNS resolver on the client device also
caches the result. Caching enables subsequent DNS queries to be answered more quickly by using data from the
cache rather than querying other name servers. The duration of the cache is determined by the 'time-to-live' (TTL )
property of each DNS record. Shorter values result in faster cache expiry and thus more round-trips to the Traffic
Manager name servers. Longer values mean that it can take longer to direct traffic away from a failed endpoint.
Traffic Manager allows you to configure the TTL used in Traffic Manager DNS responses to be as low as 0
seconds and as high as 2,147,483,647 seconds (the maximum range compliant with RFC 1035), enabling you to
choose the value that best balances the needs of your application.
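The TTL trade-off can be made concrete with a rough worst-case estimate (the probe interval and failure threshold are illustrative parameters here, not Traffic Manager's fixed values):

```python
def max_stale_seconds(ttl, probe_interval, tolerated_failures):
    # Worst case before all clients stop being sent to a failed endpoint:
    # the time for health monitoring to mark it degraded, plus the DNS TTL
    # that resolvers and clients may still be caching.
    return probe_interval * tolerated_failures + ttl
```

Under this model, lowering the TTL shortens the window in which clients can still be directed to a failed endpoint, at the cost of more round-trips to the Traffic Manager name servers.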

Pricing
For pricing information, see Traffic Manager Pricing.

FAQ
For frequently asked questions about Traffic Manager, see Traffic Manager FAQs

Next steps
Learn more about Traffic Manager endpoint monitoring and automatic failover.
Learn more about Traffic Manager traffic routing methods.
Learn about some of the other key networking capabilities of Azure.
What is VPN Gateway?
4/25/2018 • 10 minutes to read • Edit Online

A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an
Azure virtual network and an on-premises location over the public Internet. You can also use a VPN gateway to
send encrypted traffic between Azure virtual networks over the Microsoft network. Each virtual network can have
only one VPN gateway. However, you can create multiple connections to the same VPN gateway. When you
create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.

What is a virtual network gateway?


A virtual network gateway is composed of two or more virtual machines that are deployed to a specific subnet
you create, which is called the gateway subnet. The VMs that are located in the gateway subnet are created when
you create the virtual network gateway. Virtual network gateway VMs are configured to contain routing tables
and gateway services specific to the gateway. You can't directly configure the VMs that are part of the virtual
network gateway and you should never deploy additional resources to the gateway subnet.
Creating a virtual network gateway can take up to 45 minutes to complete. When you create a virtual network
gateway, gateway VMs are deployed to the gateway subnet and configured with the settings that you specify.
One of the settings you configure is the gateway type. The gateway type 'vpn' specifies that the type of virtual
network gateway created is a VPN gateway. After you create a VPN gateway, you can create an IPsec/IKE VPN
tunnel connection between that VPN gateway and another VPN gateway (VNet-to-VNet), or create a cross-
premises IPsec/IKE VPN tunnel connection between the VPN gateway and an on-premises VPN device (Site-to-
Site). You can also create a Point-to-Site VPN connection (VPN over IKEv2 or SSTP), which lets you connect to
your virtual network from a remote location, such as from a conference or from home.

Configuring a VPN Gateway


A VPN gateway connection relies on multiple resources that are configured with specific settings. Most of the
resources can be configured separately, although some resources must be configured in a certain order.
Settings
The settings that you chose for each resource are critical to creating a successful connection. For information
about individual resources and settings for VPN Gateway, see About VPN Gateway settings. The article contains
information to help you understand gateway types, gateway SKUs, VPN types, connection types, gateway
subnets, local network gateways, and various other resource settings that you may want to consider.
Deployment tools
You can start out creating and configuring resources using one configuration tool, such as the Azure portal. You
can later decide to switch to another tool, such as PowerShell, to configure additional resources, or modify
existing resources when applicable. Currently, you can't configure every resource and resource setting in the
Azure portal. The instructions in the articles for each connection topology specify when a specific configuration
tool is needed.
Deployment model
There are currently two deployment models for Azure. When you configure a VPN gateway, the steps you take
depend on the deployment model that you used to create your virtual network. For example, if you created your
VNet using the classic deployment model, you use the guidelines and instructions for the classic deployment
model to create and configure your VPN gateway settings. For more information about deployment models, see
Understanding Resource Manager and classic deployment models.
Planning table
The following table can help you decide the best connectivity option for your solution.

| | POINT-TO-SITE | SITE-TO-SITE | EXPRESSROUTE |
|---|---|---|---|
| Azure supported services | Cloud Services and Virtual Machines | Cloud Services and Virtual Machines | Services list |
| Typical bandwidths | Based on the gateway SKU | Typically < 1 Gbps aggregate | 50 Mbps, 100 Mbps, 200 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, 10 Gbps |
| Protocols supported | Secure Sockets Tunneling Protocol (SSTP) and IPsec | IPsec | Direct connection over VLANs, NSP's VPN technologies (MPLS, VPLS, ...) |
| Routing | RouteBased (dynamic) | PolicyBased (static routing) and RouteBased (dynamic routing VPN) | BGP |
| Connection resiliency | Active-passive | Active-passive or active-active | Active-active |
| Typical use case | Prototyping, dev/test/lab scenarios for cloud services and virtual machines | Dev/test/lab scenarios and small-scale production workloads for cloud services and virtual machines | Access to all Azure services (validated list), enterprise-class and mission-critical workloads, backup, big data, Azure as a DR site |
| SLA | SLA | SLA | SLA |
| Pricing | Pricing | Pricing | Pricing |
| Technical documentation | VPN Gateway documentation | VPN Gateway documentation | ExpressRoute documentation |
| FAQ | VPN Gateway FAQ | VPN Gateway FAQ | ExpressRoute FAQ |

Gateway SKUs
When you create a virtual network gateway, you specify the gateway SKU that you want to use. Select the SKU
that satisfies your requirements based on the types of workloads, throughputs, features, and SLAs. For more
information about gateway SKUs, including supported features, production and dev-test, and configuration steps,
see Gateway SKUs.
Gateway SKUs by tunnel, connection, and throughput
| SKU | S2S/VNET-TO-VNET TUNNELS | P2S CONNECTIONS | AGGREGATE THROUGHPUT BENCHMARK |
|---|---|---|---|
| VpnGw1 | Max. 30 | Max. 128* | 650 Mbps |
| VpnGw2 | Max. 30 | Max. 128* | 1 Gbps |
| VpnGw3 | Max. 30 | Max. 128* | 1.25 Gbps |
| Basic | Max. 10 | Max. 128 | 100 Mbps |

*Contact support if additional connections are needed.


Aggregate Throughput Benchmark is based on measurements of multiple tunnels aggregated through a
single gateway. It is not a guaranteed throughput due to Internet traffic conditions and your application
behaviors.
Pricing information can be found on the Pricing page.
SLA (Service Level Agreement) information can be found on the SLA page.
VpnGw1, VpnGw2, and VpnGw3 are supported for VPN gateways using the Resource Manager
deployment model only.
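Because all tunnels share the gateway's aggregate throughput, a rough even-split estimate per tunnel can help with capacity planning (an illustrative back-of-the-envelope calculation; real sharing depends on actual traffic patterns, and tunnels are not throttled to this value):

```python
def per_tunnel_estimate_mbps(aggregate_mbps, active_tunnels):
    # Even-split estimate of per-tunnel throughput when every tunnel
    # is busy at the same time.
    if active_tunnels < 1:
        raise ValueError("need at least one active tunnel")
    return aggregate_mbps / active_tunnels
```

For example, a VpnGw1 gateway (650 Mbps benchmark) carrying 10 simultaneously busy tunnels averages roughly 65 Mbps per tunnel under this assumption.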

Connection topology diagrams


It's important to know that there are different configurations available for VPN gateway connections. You need to
determine which configuration best fits your needs. The sections below provide information and topology
diagrams about each VPN gateway connection type; each section contains a table that lists:
Available deployment model
Available configuration tools
Links that take you directly to an article, if available
Use the diagrams and descriptions to help select the connection topology to match your requirements. The
diagrams show the main baseline topologies, but it's possible to build more complex configurations using the
diagrams as a guideline.

Site-to-Site and Multi-Site (IPsec/IKE VPN tunnel)


Site-to-Site
A Site-to-Site (S2S ) VPN gateway connection is a connection over IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. S2S
connections can be used for cross-premises and hybrid configurations. A S2S connection requires a VPN device
located on-premises that has a public IP address assigned to it and is not located behind a NAT. For information
about selecting a VPN device, see the VPN Gateway FAQ - VPN devices.

Multi-Site
This type of connection is a variation of the Site-to-Site connection. You create more than one VPN connection
from your virtual network gateway, typically connecting to multiple on-premises sites. When working with
multiple connections, you must use a RouteBased VPN type (known as a dynamic gateway when working with
classic VNets). Because each virtual network can only have one VPN gateway, all connections through the
gateway share the available bandwidth. This type of connection is often called a "multi-site" connection.
Deployment models and methods for Site-to-Site and Multi-Site

| DEPLOYMENT MODEL/METHOD | AZURE PORTAL | POWERSHELL | AZURE CLI |
|---|---|---|---|
| Resource Manager | Article | Article, Article+ | Article |
| Classic | Article** | Article+ | Not Supported |

(**) denotes that this method contains steps that require PowerShell.
(+) denotes that this article is written for multi-site connections.

Point-to-Site (VPN over IKEv2 or SSTP)


A Point-to-Site (P2S ) VPN gateway connection lets you create a secure connection to your virtual network from
an individual client computer. A P2S connection is established by starting it from the client computer. This
solution is useful for telecommuters who want to connect to Azure VNets from a remote location, such as from
home or a conference. P2S VPN is also a useful solution to use instead of S2S VPN when you have only a few
clients that need to connect to a VNet.
Unlike S2S connections, P2S connections do not require an on-premises public-facing IP address or a VPN
device. P2S connections can be used with S2S connections through the same VPN gateway, as long as all the
configuration requirements for both connections are compatible. For more information about Point-to-Site
connections, see About Point-to-Site VPN.
Deployment models and methods for P2S
Azure native certificate authentication

DEPLOYMENT MODEL/METHOD AZURE PORTAL POWERSHELL

Resource Manager Article Article

Classic Article Supported

RADIUS authentication

DEPLOYMENT MODEL/METHOD AZURE PORTAL POWERSHELL

Resource Manager Supported Article

Classic Not Supported Not Supported

VNet-to-VNet connections (IPsec/IKE VPN tunnel)


Connecting a virtual network to another virtual network (VNet-to-VNet) is similar to connecting a VNet to an on-
premises site location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE.
You can even combine VNet-to-VNet communication with multi-site connection configurations. This lets you
establish network topologies that combine cross-premises connectivity with inter-virtual network connectivity.
The VNets you connect can be:
in the same or different regions
in the same or different subscriptions
in the same or different deployment models
Connections between deployment models
Azure currently has two deployment models: classic and Resource Manager. If you have been using Azure for
some time, you probably have Azure VMs and instance roles running in a classic VNet. Your newer VMs and role
instances may be running in a VNet created in Resource Manager. You can create a connection between the
VNets to allow the resources in one VNet to communicate directly with resources in another.
VNet peering
You may be able to use VNet peering to create your connection, as long as your virtual network meets certain
requirements. VNet peering does not use a virtual network gateway. For more information, see VNet peering.
Deployment models and methods for VNet-to -VNet
DEPLOYMENT MODEL/METHOD                           AZURE PORTAL   POWERSHELL   AZURE CLI

Classic                                           Article*       Supported    Not Supported

Resource Manager                                  Article+       Article      Article

Connections between different deployment models   Article*       Article      Not Supported

(+) denotes this deployment method is available only for VNets in the same subscription.
(*) denotes that this deployment method also requires PowerShell.

ExpressRoute (private connection)


ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection
facilitated by a connectivity provider. With ExpressRoute, you can establish connections to Microsoft cloud
services, such as Microsoft Azure, Office 365, and CRM Online. Connectivity can be from an any-to-any (IP VPN)
network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider at a
co-location facility.
ExpressRoute connections do not go over the public Internet. This allows ExpressRoute connections to offer more
reliability, faster speeds, lower latencies, and higher security than typical connections over the Internet.
An ExpressRoute connection uses a virtual network gateway as part of its required configuration. In an
ExpressRoute connection, the virtual network gateway is configured with the gateway type 'ExpressRoute', rather
than 'Vpn'. While traffic that travels over an ExpressRoute circuit is not encrypted by default, it is possible to
create a solution that allows you to send encrypted traffic over an ExpressRoute circuit. For more information
about ExpressRoute, see the ExpressRoute technical overview.

Site-to-Site and ExpressRoute coexisting connections


ExpressRoute is a direct, private connection from your WAN (not over the public Internet) to Microsoft Services,
including Azure. Site-to-Site VPN traffic travels encrypted over the public Internet. Being able to configure Site-
to-Site VPN and ExpressRoute connections for the same virtual network has several advantages.
You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute, or use Site-to-Site VPNs to
connect to sites that are not part of your network, but that are connected through ExpressRoute. Notice that this
configuration requires two virtual network gateways for the same virtual network, one using the gateway type
'Vpn', and the other using the gateway type 'ExpressRoute'.
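A minimal sketch of this rule, assuming gateway types are reported as the strings 'Vpn' and 'ExpressRoute':

```python
# Coexisting S2S VPN + ExpressRoute requires two gateways in the same
# virtual network, one of each gateway type.
def supports_coexistence(gateway_types):
    """Return True when both a 'Vpn' and an 'ExpressRoute' gateway exist."""
    return {"Vpn", "ExpressRoute"} <= set(gateway_types)

print(supports_coexistence(["Vpn", "ExpressRoute"]))  # True
print(supports_coexistence(["Vpn"]))                  # False
```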

Deployment models and methods for S2S and ExpressRoute coexist


DEPLOYMENT MODEL/METHOD AZURE PORTAL POWERSHELL

Resource Manager Not Supported Article

Classic Not Supported Article

Pricing
You pay for two things: the hourly compute costs for the virtual network gateway, and the egress data transfer
from the virtual network gateway. Pricing information can be found on the Pricing page.
Virtual network gateway compute costs
Each virtual network gateway has an hourly compute cost. The price is based on the gateway SKU that you
specify when you create a virtual network gateway. The cost is for the gateway itself and is in addition to the data
transfer that flows through the gateway.
Data transfer costs
Data transfer costs are calculated based on egress traffic from the source virtual network gateway.
If you are sending traffic to your on-premises VPN device, it will be charged with the Internet egress data
transfer rate.
If you are sending traffic between virtual networks in different regions, the pricing is based on the region.
If you are sending traffic only between virtual networks in the same region, there are no data transfer costs.
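The three charging rules can be summarized in a short function. The rates used here are placeholders for illustration, not published prices:

```python
# Hedged sketch of VPN gateway egress charging; all rates are assumed.
INTERNET_EGRESS_PER_GB = 0.087          # assumed Internet egress rate
INTER_REGION_PER_GB = {"eastus": 0.02}  # assumed per-source-region rates

def egress_cost(gb, destination, source_region=None):
    if destination == "on-premises":
        return gb * INTERNET_EGRESS_PER_GB              # Internet egress rate applies
    if destination == "vnet-other-region":
        return gb * INTER_REGION_PER_GB[source_region]  # priced by region
    if destination == "vnet-same-region":
        return 0.0                                      # same-region VNet traffic is free
    raise ValueError(f"unknown destination: {destination}")

print(egress_cost(100, "vnet-same-region"))  # 0.0
```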
For more information about gateway SKUs for VPN Gateway, see Gateway SKUs.

FAQ
For frequently asked questions about VPN gateway, see the VPN Gateway FAQ.
Next steps
Plan your VPN gateway configuration. See VPN Gateway Planning and Design.
View the VPN Gateway FAQ for additional information.
View the Subscription and service limits.
Learn about some of the other key networking capabilities of Azure.
ExpressRoute overview
3/12/2018 • 5 minutes to read • Edit Online

Microsoft Azure ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private
connection facilitated by a connectivity provider. With ExpressRoute, you can establish connections to Microsoft
cloud services, such as Microsoft Azure, Office 365, and Dynamics 365.
Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual
cross-connection through a connectivity provider at a co-location facility. ExpressRoute connections do not go over the
public Internet. This lets ExpressRoute connections offer more reliability, faster speeds, lower latencies, and higher
security than typical connections over the Internet. For information on how to connect your network to Microsoft
using ExpressRoute, see ExpressRoute connectivity models.

Key benefits
Layer 3 connectivity between your on-premises network and the Microsoft Cloud through a connectivity
provider. Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet connection, or
through a virtual cross-connection via an Ethernet exchange.
Connectivity to Microsoft cloud services across all regions in the geopolitical region.
Global connectivity to Microsoft services across all regions with ExpressRoute premium add-on.
Dynamic routing between your network and Microsoft over industry standard protocols (BGP).
Built-in redundancy in every peering location for higher reliability.
Connection uptime SLA.
QoS support for Skype for Business.
For more information, see the ExpressRoute FAQ.

Features
Layer 3 connectivity
Microsoft uses an industry standard dynamic routing protocol (BGP) to exchange routes between your on-premises
network, your instances in Azure, and Microsoft public addresses. We establish multiple BGP sessions with your
network for different traffic profiles. More details can be found in the ExpressRoute circuit and routing domains
article.
Redundancy
Each ExpressRoute circuit consists of two connections to two Microsoft Enterprise edge routers (MSEEs) from the
connectivity provider/your network edge. Microsoft requires dual BGP connection from the connectivity
provider/your side – one to each MSEE. You may choose not to deploy redundant devices/Ethernet circuits at your
end. However, connectivity providers use redundant devices to ensure that your connections are handed off to
Microsoft in a redundant manner. A redundant Layer 3 connectivity configuration is a requirement for our SLA
to be valid.
Connectivity to Microsoft cloud services
ExpressRoute connections enable access to the following services:
Microsoft Azure services
Microsoft Office 365 services
Microsoft Dynamics 365

NOTE
Software as a Service offerings, like Office 365 and Dynamics 365, were created to be accessed securely and reliably via the
Internet. Because of this, we recommend ExpressRoute for these applications only for specific scenarios. For information
about using ExpressRoute to access Office 365, visit Azure ExpressRoute for Office 365.

For a detailed list of services supported over ExpressRoute, visit the ExpressRoute FAQ page.
Connectivity to all regions within a geopolitical region
You can connect to Microsoft in one of our peering locations and have access to all regions within the geopolitical
region.
For example, if you connected to Microsoft in Amsterdam through ExpressRoute, you have access to all Microsoft
cloud services hosted in Northern Europe and Western Europe. For an overview of the geopolitical regions, the
associated Microsoft cloud regions, and corresponding ExpressRoute peering locations, see the ExpressRoute
partners and peering locations article.
Global connectivity with ExpressRoute premium add-on
You can enable the ExpressRoute premium add-on feature to extend connectivity across geopolitical boundaries.
For example, if you are connected to Microsoft in Amsterdam through ExpressRoute, you will have access to all
Microsoft cloud services hosted in all regions across the world (national clouds are excluded). You can access
services deployed in South America or Australia the same way you access the North and West Europe regions.
Rich connectivity partner ecosystem
ExpressRoute has a constantly growing ecosystem of connectivity providers and SI partners. For the latest
information, refer to the ExpressRoute providers and locations article.
Connectivity to national clouds
Microsoft operates isolated cloud environments for special geopolitical regions and customer segments. Refer to
the ExpressRoute providers and locations page for a list of national clouds and providers.
Bandwidth options
You can purchase ExpressRoute circuits for a wide range of bandwidths. Supported bandwidths are listed below.
Be sure to check with your connectivity provider to determine the list of supported bandwidths they provide.
50 Mbps
100 Mbps
200 Mbps
500 Mbps
1 Gbps
2 Gbps
5 Gbps
10 Gbps
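Since providers may offer only a subset of these sizes, a requested bandwidth can be validated against the supported list, for example:

```python
# The supported ExpressRoute circuit bandwidths from the list above, in Mbps.
SUPPORTED_MBPS = [50, 100, 200, 500, 1000, 2000, 5000, 10000]

def validate_bandwidth(mbps: int) -> int:
    if mbps not in SUPPORTED_MBPS:
        raise ValueError(f"{mbps} Mbps is not a supported circuit bandwidth")
    return mbps

print(validate_bandwidth(500))  # 500
```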
Dynamic scaling of bandwidth
You can increase the ExpressRoute circuit bandwidth (on a best effort basis) without having to tear down your
connections.
Flexible billing models
You can pick a billing model that works best for you. Choose between the billing models listed below. For more
information, see the ExpressRoute FAQ.
Unlimited data. The ExpressRoute circuit is charged based on a monthly fee, and all inbound and outbound
data transfer is included free of charge.
Metered data. The ExpressRoute circuit is charged based on a monthly fee. All inbound data transfer is free of
charge. Outbound data transfer is charged per GB of data transfer. Data transfer rates vary by region.
ExpressRoute premium add-on. The ExpressRoute premium is an add-on over the ExpressRoute circuit. The
ExpressRoute premium add-on provides the following capabilities:
Increased route limits for Azure public and Azure private peering from 4,000 routes to 10,000 routes.
Global connectivity for services. An ExpressRoute circuit created in any region (excluding national
clouds) will have access to resources across any other region in the world. For example, a virtual
network created in West Europe can be accessed through an ExpressRoute circuit provisioned in Silicon
Valley.
Increased number of VNet links per ExpressRoute circuit from 10 to a larger limit, depending on the
bandwidth of the circuit.
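To compare the metered and unlimited models, you can estimate monthly cost as a function of egress volume. The fees and per-GB rate below are assumptions for illustration, not Azure prices:

```python
# Hedged comparison of ExpressRoute billing models (all numbers assumed).
def monthly_cost(model, egress_gb, monthly_fee, per_gb_rate=0.0):
    if model == "unlimited":
        return monthly_fee                            # all transfer included
    if model == "metered":
        return monthly_fee + egress_gb * per_gb_rate  # inbound free, outbound per GB
    raise ValueError(f"unknown model: {model}")

# With these assumed numbers, heavy egress favors the unlimited model:
print(monthly_cost("metered", 5000, 300, 0.025))  # 425.0
print(monthly_cost("unlimited", 5000, 400))       # 400
```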

FAQ
For frequently asked questions about ExpressRoute, see the ExpressRoute FAQ.

Next steps
Learn about ExpressRoute connectivity models.
Learn about ExpressRoute connections and routing domains. See ExpressRoute circuits and routing domains.
Find a service provider. See ExpressRoute partners and peering locations.
Ensure that all prerequisites are met. See ExpressRoute prerequisites.
Refer to the requirements for Routing, NAT, and QoS.
Configure your ExpressRoute connection.
Create an ExpressRoute circuit
Configure peering for an ExpressRoute circuit
Connect a virtual network to an ExpressRoute circuit
Learn about some of the other key networking capabilities of Azure.
Quickstart: Create a virtual network using the Azure
portal
4/9/2018 • 3 minutes to read • Edit Online

A virtual network enables Azure resources, such as virtual machines (VMs), to communicate privately with each
other, and with the internet. In this quickstart, you learn how to create a virtual network. After creating a virtual
network, you deploy two VMs into the virtual network. You then connect to one VM from the internet, and
communicate privately between the two VMs.
If you don't have an Azure subscription, create a free account before you begin.

Log in to Azure
Log in to the Azure portal at https://portal.azure.com.

Create a virtual network


1. Select + Create a resource on the upper, left corner of the Azure portal.
2. Select Networking, and then select Virtual network.
3. Enter, or select, the following information, accept the defaults for the remaining settings, and then select
Create:

SETTING VALUE

Name myVirtualNetwork

Subscription Select your subscription.

Resource group Select Create new and enter myResourceGroup.

Location Select East US.


Create virtual machines
Create two VMs in the virtual network:
Create the first VM
1. Select + Create a resource found on the upper, left corner of the Azure portal.
2. Select Compute, and then select Windows Server 2016 Datacenter.
3. Enter, or select, the following information, accept the defaults for the remaining settings, and then select OK:

SETTING VALUE

Name myVm1

User name Enter a user name of your choosing.

Password Enter a password of your choosing. The password must be at least 12 characters long and meet the
defined complexity requirements.

Subscription Select your subscription.

Resource group Select Use existing and select myResourceGroup.

Location Select East US


4. Select a size for the VM and then select Select.
5. Under Settings, accept all the defaults and then select OK.
6. On the Summary page, under Create, select Create to start VM deployment. The VM takes a few minutes to
deploy.
Create the second VM
Complete steps 1-6 again, but in step 3, name the VM myVm2.

Connect to a VM from the internet


1. After myVm1 is created, connect to it. At the top of the Azure portal, enter myVm1. When myVm1 appears
in the search results, select it. Select the Connect button.

2. After selecting the Connect button, a Remote Desktop Protocol (.rdp) file is created and downloaded to
your computer.
3. Open the downloaded rdp file. If prompted, select Connect. Enter the user name and password you specified
when creating the VM. You may need to select More choices, then Use a different account, to specify the
credentials you entered when you created the VM.
4. Select OK.
5. You may receive a certificate warning during the sign-in process. If you receive the warning, select Yes or
Continue, to proceed with the connection.

Communicate between VMs


1. From PowerShell, enter ping myvm2. Ping fails, because ping uses the Internet Control Message Protocol
(ICMP), and ICMP is not allowed through the Windows Firewall by default.
2. To allow myVm2 to ping myVm1 in a later step, enter the following command from PowerShell, which
allows ICMP inbound through the Windows firewall:

New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4

3. Close the remote desktop connection to myVm1.


4. Complete the steps in Connect to a VM from the internet again, but connect to myVm2. From a command
prompt, enter ping myvm1.
You receive replies from myVm1, because you allowed ICMP through the Windows firewall on the myVm1
VM in a previous step.
5. Close the remote desktop connection to myVm2.

Clean up resources
When no longer needed, delete the resource group and all of the resources it contains:
1. Enter myResourceGroup in the Search box at the top of the portal. When you see myResourceGroup in the
search results, select it.
2. Select Delete resource group.
3. Enter myResourceGroup for TYPE THE RESOURCE GROUP NAME: and select Delete.

Next steps
In this quickstart, you created a default virtual network and two VMs. You connected to one VM from the internet
and communicated privately between the VM and another VM. To learn more about virtual network settings, see
Manage a virtual network.
By default, Azure allows unrestricted private communication between virtual machines, but only allows inbound
remote desktop connections to Windows VMs from the internet. To learn how to allow or restrict different types of
network communication to and from VMs, advance to the Filter network traffic tutorial.
Create an application gateway using the Azure portal
5/1/2018 • 3 minutes to read • Edit Online

You can use the Azure portal to create or manage application gateways. This quickstart shows you how to create
network resources, backend servers, and an application gateway.
If you don't have an Azure subscription, create a free account before you begin.

Log in to Azure
Log in to the Azure portal at https://portal.azure.com.

Create an application gateway


A virtual network is needed for communication between the resources that you create. Two subnets are created in
this example: one for the application gateway, and the other for the backend servers. You can create a virtual
network at the same time that you create the application gateway.
1. Click New found on the upper left-hand corner of the Azure portal.
2. Select Networking and then select Application Gateway in the Featured list.
3. Enter these values for the application gateway:
myAppGateway - for the name of the application gateway.
myResourceGroupAG - for the new resource group.
4. Accept the default values for the other settings and then click OK.
5. Click Choose a virtual network, click Create new, and then enter these values for the virtual network:
myVNet - for the name of the virtual network.
10.0.0.0/16 - for the virtual network address space.
myAGSubnet - for the subnet name.
10.0.0.0/24 - for the subnet address space.

6. Click OK to create the virtual network and subnet.


7. Click Choose a public IP address, click Create new, and then enter the name of the public IP address. In this
example, the public IP address is named myAGPublicIPAddress. Accept the default values for the other settings
and then click OK.
8. Accept the default values for the Listener configuration, leave the Web application firewall disabled, and then
click OK.
9. Review the settings on the summary page, and then click OK to create the virtual network, the public IP
address, and the application gateway. It may take several minutes for the application gateway to be created;
wait until the deployment finishes successfully before moving on to the next section.
Add a subnet
1. Click All resources in the left-hand menu, and then click myVNet from the resources list.
2. Click Subnets, and then click Subnet.
3. Enter myBackendSubnet for the name of the subnet and then click OK.
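The resulting address plan can be sanity-checked with Python's ipaddress module. The myBackendSubnet prefix (10.0.1.0/24) is an assumption here; the portal proposes the next available range by default:

```python
import ipaddress

# Values from the steps above; the backend subnet prefix is assumed.
vnet = ipaddress.ip_network("10.0.0.0/16")
ag_subnet = ipaddress.ip_network("10.0.0.0/24")       # myAGSubnet
backend_subnet = ipaddress.ip_network("10.0.1.0/24")  # myBackendSubnet (assumed)

# Both subnets must fit inside the VNet address space and not overlap.
assert ag_subnet.subnet_of(vnet) and backend_subnet.subnet_of(vnet)
assert not ag_subnet.overlaps(backend_subnet)
print("address plan is consistent")
```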

Create backend servers


In this example, you create two virtual machines to be used as backend servers for the application gateway. You
also install IIS on the virtual machines to verify that the application gateway was successfully created.
Create a virtual machine
1. Click New.
2. Click Compute and then select Windows Server 2016 Datacenter in the Featured list.
3. Enter these values for the virtual machine:
myVM - for the name of the virtual machine.
azureuser - for the administrator user name.
Azure123456! for the password.
Select Use existing, and then select myResourceGroupAG.
4. Click OK.
5. Select DS1_V2 for the size of the virtual machine, and click Select.
6. Make sure that myVNet is selected for the virtual network and the subnet is myBackendSubnet.
7. Click Disabled to disable boot diagnostics.
8. Click OK, review the settings on the summary page, and then click Create.
Install IIS
1. Open the interactive shell and make sure that it is set to PowerShell.
2. Run the following command to install IIS on the virtual machine:

Set-AzureRmVMExtension `
  -ResourceGroupName myResourceGroupAG `
  -ExtensionName IIS `
  -VMName myVM `
  -Publisher Microsoft.Compute `
  -ExtensionType CustomScriptExtension `
  -TypeHandlerVersion 1.4 `
  -SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' `
  -Location EastUS

3. Create a second virtual machine and install IIS using the steps that you just finished. Enter myVM2 for its
name and for VMName in Set-AzureRmVMExtension.
Add backend servers
1. Click All resources, and then click myAppGateway.
2. Click Backend pools. A default pool was automatically created with the application gateway. Click
appGatewayBackendPool.
3. Click Add target to add each virtual machine that you created to the backend pool.
4. Click Save.

Test the application gateway


1. Find the public IP address for the application gateway on the Overview screen. Click All resources and
then click myAGPublicIPAddress.

2. Copy the public IP address, and then paste it into the address bar of your browser.

Clean up resources
When no longer needed, delete the resource group, application gateway, and all related resources. To do so, select
the resource group that contains the application gateway and click Delete.

Next steps
In this quickstart, you created a resource group, network resources, and backend servers. You then used those
resources to create an application gateway. To learn more about application gateways and their associated
resources, continue to the how-to articles.
Create an application gateway with a web application
firewall using the Azure portal
5/1/2018 • 4 minutes to read • Edit Online

You can use the Azure portal to create an application gateway with a web application firewall (WAF). The WAF uses
OWASP rules to protect your application. These rules include protection against attacks such as SQL injection,
cross-site scripting attacks, and session hijacks.
In this article, you learn how to:
Create an application gateway with WAF enabled
Create the virtual machines used as backend servers
Create a storage account and configure diagnostics

Log in to Azure
Log in to the Azure portal at https://portal.azure.com.

Create an application gateway


A virtual network is needed for communication between the resources that you create. Two subnets are created in
this example: one for the application gateway, and the other for the backend servers. You can create a virtual
network at the same time that you create the application gateway.
1. Click New found on the upper left-hand corner of the Azure portal.
2. Select Networking and then select Application Gateway in the Featured list.
3. Enter these values for the application gateway:
myAppGateway - for the name of the application gateway.
myResourceGroupAG - for the new resource group.
Select WAF for the tier of the application gateway.
4. Accept the default values for the other settings and then click OK.
5. Click Choose a virtual network, click Create new, and then enter these values for the virtual network:
myVNet - for the name of the virtual network.
10.0.0.0/16 - for the virtual network address space.
myAGSubnet - for the subnet name.
10.0.0.0/24 - for the subnet address space.
6. Click OK to create the virtual network and subnet.
7. Click Choose a public IP address, click Create new, and then enter the name of the public IP address. In this
example, the public IP address is named myAGPublicIPAddress. Accept the default values for the other settings
and then click OK.
8. Accept the default values for the Listener configuration, set the Web application firewall to Enabled, and then
click OK.
9. Review the settings on the summary page, and then click OK to create network resources and the application
gateway. It may take several minutes for the application gateway to be created; wait until the deployment
finishes successfully before moving on to the next section.
Add a subnet
1. Click All resources in the left-hand menu, and then click myVNet from the resources list.
2. Click Subnets, and then click Subnet.

3. Enter myBackendSubnet for the name of the subnet and then click OK.

Create backend servers


In this example, you create two virtual machines to be used as backend servers for the application gateway. You
also install IIS on the virtual machines to verify that the application gateway was successfully created.
Create a virtual machine
1. Click New.
2. Click Compute and then select Windows Server 2016 Datacenter in the Featured list.
3. Enter these values for the virtual machine:
myVM - for the name of the virtual machine.
azureuser - for the administrator user name.
Azure123456! for the password.
Select Use existing, and then select myResourceGroupAG.
4. Click OK.
5. Select DS1_V2 for the size of the virtual machine, and click Select.
6. Make sure that myVNet is selected for the virtual network and the subnet is myBackendSubnet.
7. Click Disabled to disable boot diagnostics.
8. Click OK, review the settings on the summary page, and then click Create.
Install IIS
1. Open the interactive shell and make sure that it is set to PowerShell.

2. Run the following command to install IIS on the virtual machine:

Set-AzureRmVMExtension `
  -ResourceGroupName myResourceGroupAG `
  -ExtensionName IIS `
  -VMName myVM `
  -Publisher Microsoft.Compute `
  -ExtensionType CustomScriptExtension `
  -TypeHandlerVersion 1.4 `
  -SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' `
  -Location EastUS

3. Create a second virtual machine and install IIS using the steps that you just finished. Enter myVM2 for its
name and for VMName in Set-AzureRmVMExtension.
Add backend servers
1. Click All resources, and then click myAppGateway.
2. Click Backend pools. A default pool was automatically created with the application gateway. Click
appGatewayBackendPool.
3. Click Add target to add each virtual machine that you created to the backend pool.
4. Click Save.

Create a storage account and configure diagnostics


Create a storage account
In this tutorial, the application gateway uses a storage account to store data for detection and prevention purposes.
You could also use Log Analytics or Event Hub to record data.
1. Click New found on the upper left-hand corner of the Azure portal.
2. Select Storage, and then select Storage account - blob, file, table, queue.
3. Enter the name of the storage account, select Use existing for the resource group, and then select
myResourceGroupAG. In this example, the storage account name is myagstore1. Accept the default values for
the other settings and then click Create.

Configure diagnostics
Configure diagnostics to record data into the ApplicationGatewayAccessLog, ApplicationGatewayPerformanceLog,
and ApplicationGatewayFirewallLog logs.
1. In the left-hand menu, click All resources, and then select myAppGateway.
2. Under Monitoring, click Diagnostics logs.
3. Click Add diagnostics setting.
4. Enter myDiagnosticsSettings as the name for the diagnostics settings.
5. Select Archive to a storage account, and then click Configure to select the myagstore1 storage account that
you previously created.
6. Select the application gateway logs to collect and retain.
7. Click Save.
Test the application gateway
1. Find the public IP address for the application gateway on the Overview screen. Click All resources and then
click myAGPublicIPAddress.

2. Copy the public IP address, and then paste it into the address bar of your browser.
Next steps
In this article, you learned how to:
Create an application gateway with WAF enabled
Create the virtual machines used as backend servers
Create a storage account and configure diagnostics
To learn more about application gateways and their associated resources, continue to the how-to articles.
Configure the geographic traffic routing method
using Traffic Manager
2/16/2018 • 3 minutes to read • Edit Online

The Geographic traffic routing method allows you to direct traffic to specific endpoints based on the geographic
location where the requests originate. This tutorial shows you how to create a Traffic Manager profile with this
routing method and configure the endpoints to receive traffic from specific geographies.

Create a Traffic Manager Profile


1. From a browser, sign in to the Azure portal. If you don’t already have an account, you can sign up for a free one-
month trial.
2. Click Create a resource > Networking > Traffic Manager profile > Create.
3. In the Create Traffic Manager profile:
a. Provide a name for your profile. This name needs to be unique within the trafficmanager.net zone. To
access your Traffic Manager profile, you use the DNS name <profile name>.trafficmanager.net.
b. Select the Geographic routing method.
c. Select the subscription you want to create this profile under.
d. Use an existing resource group or create a new resource group to place this profile under. If you choose
to create a new resource group, use the Resource Group location dropdown to specify the location of
the resource group. This setting refers to the location of the resource group, and has no impact on the
Traffic Manager profile that's deployed globally.
e. After you click Create, your Traffic Manager profile is created and deployed globally.

Add endpoints
1. Search for the Traffic Manager profile name you created in the portal’s search bar and click on the result when it
is shown.
2. Navigate to Settings -> Endpoints in Traffic Manager.
3. Click Add to show the Add endpoint page.
4. In the Add endpoint page that is displayed, complete the following steps:
5. Select Type depending upon the type of endpoint you are adding. For geographic routing profiles used in
production, we strongly recommend using nested endpoint types containing a child profile with more than one
endpoint. For more details, see FAQs about geographic traffic routing methods.
6. Provide a Name by which you want to recognize this endpoint.
7. Certain fields on this page depend on the type of endpoint you are adding:
a. If you are adding an Azure endpoint, select the Target resource type and the Target based on the
resource you want to direct traffic to
b. If you are adding an External endpoint, provide the Fully-qualified domain name (FQDN ) for your
endpoint.
c. If you are adding a Nested endpoint, select the Target resource that corresponds to the child profile
you want to use and specify the Minimum child endpoints count.
8. In the Geo-mapping section, use the drop down to add the regions from where you want traffic to be sent to
this endpoint. You must add at least one region, and you can have multiple regions mapped.
9. Repeat this for all endpoints you want to add under this profile
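The portal steps above can also be scripted with the AzureRM Traffic Manager cmdlets. The following is a minimal sketch, assuming an authenticated session (Connect-AzureRmAccount); the profile name, resource group, relative DNS name, and endpoint target are placeholders:

```powershell
# Create a Traffic Manager profile that uses the Geographic routing method.
$tmProfile = New-AzureRmTrafficManagerProfile -Name "myGeoProfile" `
    -ResourceGroupName "myResourceGroup" -TrafficRoutingMethod Geographic `
    -RelativeDnsName "mygeoprofile" -Ttl 30 `
    -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"

# Add an external endpoint mapped to a geographic region code (GEO-EU = Europe).
Add-AzureRmTrafficManagerEndpointConfig -EndpointName "euEndpoint" `
    -TrafficManagerProfile $tmProfile -Type ExternalEndpoints `
    -Target "eu.contoso.com" -EndpointStatus Enabled `
    -GeoMapping @("GEO-EU")

# Commit the endpoint change to the profile.
Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $tmProfile
```

For production geographic routing, the endpoint added here would typically be a nested endpoint pointing at a child profile, as recommended in the steps above.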

Use the Traffic Manager profile


1. In the portal’s search bar, search for the Traffic Manager profile name that you created in the preceding
section and click the Traffic Manager profile in the results that are displayed.
2. Click Overview.
3. The Traffic Manager profile displays the DNS name of your newly created Traffic Manager profile. This can be
used by any clients (for example, by navigating to it using a web browser) to get routed to the right endpoint as
determined by the routing type. In the case of geographic routing, Traffic Manager looks at the source IP of the
incoming request and determines the region from which it is originating. If that region is mapped to an
endpoint, traffic is routed there. If the region is not mapped to an endpoint, Traffic Manager returns a
NODATA query response.
Next steps
Learn more about Geographic traffic routing method.
Learn how to test Traffic Manager settings.
Create an application gateway with an internal load
balancer (ILB)
5/25/2018 • 6 minutes to read • Edit Online

Azure Application Gateway can be configured with an Internet-facing VIP or with an internal endpoint that is not
exposed to the Internet, also known as an internal load balancer (ILB) endpoint. Configuring the gateway with an
ILB is useful for internal line-of-business applications that are not exposed to the Internet. It's also useful for
services and tiers within a multi-tier application that sit in a security boundary that is not exposed to the Internet
but still require round-robin load distribution, session stickiness, or Secure Sockets Layer (SSL) termination.
This article walks you through the steps to configure an application gateway with an ILB.

Before you begin


1. Install the latest version of the Azure PowerShell cmdlets by using the Web Platform Installer. You can
download and install the latest version from the Windows PowerShell section of the Downloads page.
2. Create a virtual network and a subnet for Application Gateway. Make sure that no virtual machines or cloud
deployments are using the subnet. Application Gateway must be by itself in a virtual network subnet.
3. The servers that you configure to use the application gateway must exist or have their endpoints created either
in the virtual network or with a public IP/VIP assigned.

What is required to create an application gateway?


Back-end server pool: The list of IP addresses of the back-end servers. The IP addresses listed should either
belong to the virtual network (in a subnet different from the application gateway's) or should be a public IP/VIP.
Back-end server pool settings: Every pool has settings like port, protocol, and cookie-based affinity. These
settings are tied to a pool and are applied to all servers within the pool.
Front-end port: This port is the public port that is opened on the application gateway. Traffic hits this port, and
then gets redirected to one of the back-end servers.
Listener: The listener has a front-end port, a protocol (Http or Https; these values are case-sensitive), and the SSL
certificate name (if configuring SSL offload).
Rule: The rule binds the listener and the back-end server pool and defines which back-end server pool the
traffic should be directed to when it hits a particular listener. Currently, only the basic rule is supported. The
basic rule is round-robin load distribution.

Create an application gateway


The difference between using Azure Classic and Azure Resource Manager is the order in which you create the
application gateway and the items that need to be configured. With Resource Manager, all items that make up an
application gateway are configured individually and then put together to create the application gateway resource.
Here are the steps that are needed to create an application gateway:
1. Create a resource group for Resource Manager
2. Create a virtual network and a subnet for the application gateway
3. Create an application gateway configuration object
4. Create an application gateway resource
Create a resource group for Resource Manager
Make sure that you switch PowerShell mode to use the Azure Resource Manager cmdlets. More info is available at
Using Windows PowerShell with Resource Manager.
Step 1

Connect-AzureRmAccount

You are prompted to authenticate with your credentials.

Step 2
Check the subscriptions for the account.

Get-AzureRmSubscription


Step 3
Choose which of your Azure subscriptions to use.

Select-AzureRmSubscription -Subscriptionid "GUID of subscription"

Step 4
Create a new resource group (skip this step if you're using an existing resource group).

New-AzureRmResourceGroup -Name appgw-rg -location "West US"

Azure Resource Manager requires that all resource groups specify a location. This location is used as the default
location for resources in that resource group. Make sure that all commands to create an application gateway use
the same resource group.
In the preceding example, we created a resource group called "appgw-rg" in the "West US" location.

Create a virtual network and a subnet for the application gateway


The following example shows how to create a virtual network by using Resource Manager:
Step 1

$subnetconfig = New-AzureRmVirtualNetworkSubnetConfig -Name subnet01 -AddressPrefix 10.0.0.0/24

This step assigns the address range 10.0.0.0/24 to a subnet variable to be used to create a virtual network.
Step 2

$vnet = New-AzureRmVirtualNetwork -Name appgwvnet -ResourceGroupName appgw-rg -Location "West US" -AddressPrefix 10.0.0.0/16 -Subnet $subnetconfig

This step creates a virtual network named "appgwvnet" in resource group "appgw-rg" for the West US region
using the prefix 10.0.0.0/16 with subnet 10.0.0.0/24.
Step 3
$subnet = $vnet.subnets[0]

This step assigns the subnet object to variable $subnet for the next steps.

Create an application gateway configuration object


Step 1

$gipconfig = New-AzureRmApplicationGatewayIPConfiguration -Name gatewayIP01 -Subnet $subnet

This step creates an application gateway IP configuration named "gatewayIP01". When Application Gateway starts,
it picks up an IP address from the configured subnet and routes network traffic to the IP addresses in the back-end
IP pool. Keep in mind that each instance takes one IP address.
Step 2

$pool = New-AzureRmApplicationGatewayBackendAddressPool -Name pool01 -BackendIPAddresses 10.1.1.8,10.1.1.9,10.1.1.10

This step configures the back-end IP address pool named "pool01" with IP addresses "10.1.1.8, 10.1.1.9, 10.1.1.10".
Those are the IP addresses that receive the network traffic that comes from the front-end IP endpoint. Replace the
preceding IP addresses with your own application IP address endpoints.
Step 3

$poolSetting = New-AzureRmApplicationGatewayBackendHttpSettings -Name poolsetting01 -Port 80 -Protocol Http -CookieBasedAffinity Disabled

This step configures application gateway setting "poolsetting01" for the load balanced network traffic in the back-
end pool.
Step 4

$fp = New-AzureRmApplicationGatewayFrontendPort -Name frontendport01 -Port 80

This step configures the front-end IP port named "frontendport01" for the ILB.
Step 5

$fipconfig = New-AzureRmApplicationGatewayFrontendIPConfig -Name fipconfig01 -Subnet $subnet

This step creates the front-end IP configuration called "fipconfig01" and associates it with a private IP from the
current virtual network subnet.
Step 6

$listener = New-AzureRmApplicationGatewayHttpListener -Name listener01 -Protocol Http -FrontendIPConfiguration $fipconfig -FrontendPort $fp

This step creates the listener called "listener01" and associates the front-end port to the front-end IP configuration.
Step 7
$rule = New-AzureRmApplicationGatewayRequestRoutingRule -Name rule01 -RuleType Basic -BackendHttpSettings $poolSetting -HttpListener $listener -BackendAddressPool $pool

This step creates the load balancer routing rule called "rule01" that configures the load balancer behavior.
Step 8

$sku = New-AzureRmApplicationGatewaySku -Name Standard_Small -Tier Standard -Capacity 2

This step configures the instance size of the application gateway.

NOTE
The default value for InstanceCount is 2, with a maximum value of 10. The default value for GatewaySize is Medium. You can
choose between Standard_Small, Standard_Medium, and Standard_Large.

Create an application gateway by using New-AzureRmApplicationGateway


The following command creates an application gateway with all the configuration items from the preceding steps.
In this example, the application gateway is called "appgwtest".

$appgw = New-AzureRmApplicationGateway -Name appgwtest -ResourceGroupName appgw-rg -Location "West US" -BackendAddressPools $pool -BackendHttpSettingsCollection $poolSetting -FrontendIpConfigurations $fipconfig -GatewayIpConfigurations $gipconfig -FrontendPorts $fp -HttpListeners $listener -RequestRoutingRules $rule -Sku $sku


Delete an application gateway


To delete an application gateway, you need to do the following steps in order:
1. Use the Stop-AzureRmApplicationGateway cmdlet to stop the gateway.
2. Use the Remove-AzureRmApplicationGateway cmdlet to remove the gateway.
3. Verify that the gateway has been removed by using the Get-AzureRmApplicationGateway cmdlet.
Step 1
Get the application gateway object and associate it to a variable "$getgw".

$getgw = Get-AzureRmApplicationGateway -Name appgwtest -ResourceGroupName appgw-rg

Step 2
Use Stop-AzureRmApplicationGateway to stop the application gateway. This sample shows the
Stop-AzureRmApplicationGateway cmdlet on the first line, followed by the output.

Stop-AzureRmApplicationGateway -ApplicationGateway $getgw


VERBOSE: 9:49:34 PM - Begin Operation: Stop-AzureApplicationGateway
VERBOSE: 10:10:06 PM - Completed Operation: Stop-AzureApplicationGateway
Name HTTP Status Code Operation ID Error
---- ---------------- ------------ ----
Successful OK ce6c6c95-77b4-2118-9d65-e29defadffb8

Once the application gateway is in a stopped state, use the Remove-AzureRmApplicationGateway cmdlet to remove the
service.

Remove-AzureRmApplicationGateway -Name appgwtest -ResourceGroupName appgw-rg -Force

VERBOSE: 10:49:34 PM - Begin Operation: Remove-AzureApplicationGateway


VERBOSE: 10:50:36 PM - Completed Operation: Remove-AzureApplicationGateway
Name HTTP Status Code Operation ID Error
---- ---------------- ------------ ----
Successful OK 055f3a96-8681-2094-a304-8d9a11ad8301

NOTE
The -Force switch can be used to suppress the remove confirmation message.

To verify that the service has been removed, you can use the Get-AzureRmApplicationGateway cmdlet. This step is not
required.

Get-AzureRmApplicationGateway -Name appgwtest -ResourceGroupName appgw-rg

VERBOSE: 10:52:46 PM - Begin Operation: Get-AzureApplicationGateway

Get-AzureApplicationGateway : ResourceNotFound: The gateway does not exist.

Next steps
If you want to configure SSL offload, see Configure an application gateway for SSL offload.
If you want to configure an application gateway to use with an ILB, see Create an application gateway with an
internal load balancer (ILB).
If you want more information about load balancing options in general, see:
Azure Load Balancer
Azure Traffic Manager
Configure a VNet-to-VNet VPN gateway connection
using the Azure portal
4/18/2018 • 18 minutes to read • Edit Online

This article helps you connect virtual networks by using the VNet-to-VNet connection type. The virtual networks
can be in the same or different regions, and from the same or different subscriptions. When connecting VNets
from different subscriptions, the subscriptions do not need to be associated with the same Active Directory tenant.
The steps in this article apply to the Resource Manager deployment model and use the Azure portal. You can also
create this configuration by using a different deployment tool or deployment model.

About connecting VNets


There are multiple ways to connect VNets. The sections below describe different ways to connect virtual networks.
VNet-to-VNet
Configuring a VNet-to-VNet connection is a good way to easily connect VNets. Connecting a virtual network to
another virtual network using the VNet-to-VNet connection type (VNet2VNet) is similar to creating a Site-to-Site
IPsec connection to an on-premises location. Both connectivity types use a VPN gateway to provide a secure
tunnel using IPsec/IKE, and both function the same way when communicating. The difference between the
connection types is the way the local network gateway is configured. When you create a VNet-to-VNet connection,
you do not see the local network gateway address space. It is automatically created and populated. If you update
the address space for one VNet, the other VNet automatically knows to route to the updated address space.
Creating a VNet-to-VNet connection is typically faster and easier than creating a Site-to-Site connection between
VNets.
Site-to-Site (IPsec)
If you are working with a complicated network configuration, you may prefer to connect your VNets using the
Site-to-Site steps instead. When you use the Site-to-Site IPsec steps, you create and configure the local network
gateways manually. The local network gateway for each VNet treats the other VNet as a local site. This lets you
specify additional address space for the local network gateway in order to route traffic. If the address space for a
VNet changes, you need to update the corresponding local network gateway to reflect that. It does not
automatically update.
VNet peering
You may want to consider connecting your VNets using VNet Peering. VNet peering does not use a VPN gateway
and has different constraints. Additionally, VNet peering pricing is calculated differently than VNet-to-VNet VPN
Gateway pricing. For more information, see VNet peering.

Why create a VNet-to-VNet connection?


You may want to connect virtual networks using a VNet-to-VNet connection for the following reasons:
Cross region geo-redundancy and geo-presence
You can set up your own geo-replication or synchronization with secure connectivity without going over
Internet-facing endpoints.
With Azure Traffic Manager and Load Balancer, you can set up highly available workload with geo-
redundancy across multiple Azure regions. One important example is to set up SQL Always On with
Availability Groups spreading across multiple Azure regions.
Regional multi-tier applications with isolation or administrative boundary
Within the same region, you can set up multi-tier applications with multiple virtual networks connected
together due to isolation or administrative requirements.
VNet-to-VNet communication can be combined with multi-site configurations. This lets you establish network
topologies that combine cross-premises connectivity with inter-virtual network connectivity, as shown in the
following diagram:

This article helps you connect VNets using the VNet-to-VNet connection type. When using these steps as an
exercise, you can use the example settings values. In the example, the virtual networks are in the same
subscription, but in different resource groups. If your VNets are in different subscriptions, you can't create the
connection in the portal. You can use PowerShell or CLI. For more information about VNet-to-VNet connections,
see the VNet-to-VNet FAQ at the end of this article.
Example settings
Values for TestVNet1:
VNet Name: TestVNet1
Address space: 10.11.0.0/16
Subscription: Select the subscription you want to use
Resource Group: TestRG1
Location: East US
Subnet Name: FrontEnd
Subnet Address range: 10.11.0.0/24
Gateway Subnet name: GatewaySubnet (this will auto-fill in the portal)
Gateway Subnet address range: 10.11.255.0/27
DNS Server: Use the IP address of your DNS Server
Virtual Network Gateway Name: TestVNet1GW
Gateway Type: VPN
VPN type: Route-based
SKU: Select the Gateway SKU you want to use
Public IP address name: TestVNet1GWIP
Connection Name: TestVNet1toTestVNet4
Shared key: You can create the shared key yourself. For this example, we'll use abc123. The important thing is
that when you create the connection between the VNets, the value must match.
Values for TestVNet4:
VNet Name: TestVNet4
Address space: 10.41.0.0/16
Subscription: Select the subscription you want to use
Resource Group: TestRG4
Location: West US
Subnet Name: FrontEnd
Subnet Address range: 10.41.0.0/24
GatewaySubnet name: GatewaySubnet (this will auto-fill in the portal)
GatewaySubnet address range: 10.41.255.0/27
DNS Server: Use the IP address of your DNS Server
Virtual Network Gateway Name: TestVNet4GW
Gateway Type: VPN
VPN type: Route-based
SKU: Select the Gateway SKU you want to use
Public IP address name: TestVNet4GWIP
Connection Name: TestVNet4toTestVNet1
Shared key: You can create the shared key yourself. For this example, we'll use abc123. The important thing is
that when you create the connection between the VNets, the value must match.
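As an alternative to the portal, the TestVNet1 example settings above can be sketched in PowerShell. This assumes the AzureRM module and an authenticated session; TestVNet4 is analogous with its own values:

```powershell
# Define the FrontEnd and gateway subnets from the example settings.
$fe = New-AzureRmVirtualNetworkSubnetConfig -Name "FrontEnd" -AddressPrefix "10.11.0.0/24"
$gw = New-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.11.255.0/27"

# Create the resource group, then the virtual network with both subnets.
New-AzureRmResourceGroup -Name "TestRG1" -Location "East US"
New-AzureRmVirtualNetwork -Name "TestVNet1" -ResourceGroupName "TestRG1" `
    -Location "East US" -AddressPrefix "10.11.0.0/16" -Subnet $fe,$gw
```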

1. Create and configure TestVNet1


If you already have a VNet, verify that the settings are compatible with your VPN gateway design. Pay particular
attention to any subnets that may overlap with other networks. If you have overlapping subnets, your connection
won't work properly. If your VNet is configured with the correct settings, you can begin the steps in the Specify a
DNS server section.
To create a virtual network
To create a VNet in the Resource Manager deployment model by using the Azure portal, follow the steps below.
The screenshots are provided as examples. Be sure to replace the values with your own. For more information
about working with virtual networks, see the Virtual Network Overview.

NOTE
In order for this VNet to connect to an on-premises location you need to coordinate with your on-premises network
administrator to carve out an IP address range that you can use specifically for this virtual network. If a duplicate address
range exists on both sides of the VPN connection, traffic does not route the way you may expect it to. Additionally, if you
want to connect this VNet to another VNet, the address space cannot overlap with that of the other VNet. Take
care to plan your network configuration accordingly.

1. From a browser, navigate to the Azure portal and, if necessary, sign in with your Azure account.
2. Click +. In the Search the marketplace field, type "Virtual Network". Locate Virtual Network from the
returned list and click to open the Virtual Network page.
3. Near the bottom of the Virtual Network page, from the Select a deployment model list, select Resource
Manager, and then click Create.

4. On the Create virtual network page, configure the VNet settings. When you fill in the fields, the red
exclamation mark becomes a green check mark when the characters entered in the field are valid. There
may be values that are auto-filled. If so, replace the values with your own. The Create virtual network
page looks similar to the following example:

5. Name: Enter the name for your virtual network.


6. Address space: Enter the address space. If you have multiple address spaces to add, add your first address
space. You can add additional address spaces later, after creating the VNet.
7. Subscription: Verify that the Subscription listed is the correct one. You can change subscriptions by using the
drop-down.
8. Resource group: Select an existing resource group, or create a new one by typing a name for your new
resource group. If you are creating a new group, name the resource group according to your planned
configuration values. For more information about resource groups, visit Azure Resource Manager Overview.
9. Location: Select the location for your VNet. The location determines where the resources that you deploy to
this VNet will reside.
10. Subnet: Add the subnet name and subnet address range. You can add additional subnets later, after creating
the VNet.
11. Select Pin to dashboard if you want to be able to find your VNet easily on the dashboard, and then click
Create.

2. Add additional address space and create subnets


You can add additional address space and create subnets once your VNet has been created.
To add additional address space
1. To add additional address space, under the Settings section on your virtual network page, click Address space
to open the Address space page.
2. Add the additional address space, and then click Save at the top of the page.

To create additional subnets


1. To create subnets, in the Settings section of your virtual network page, click Subnets to open the Subnets
page.
2. On the Subnets page, click +Subnet to open the Add subnet page. Name your new subnet and specify the
address range.

3. To save your changes, click OK at the bottom of the page.

3. Create a gateway subnet


Before creating a virtual network gateway for your virtual network, you first need to create the gateway subnet.
The gateway subnet contains the IP addresses that are used by the virtual network gateway. If possible, it's best to
create a gateway subnet using a CIDR block of /28 or /27 in order to provide enough IP addresses to
accommodate additional future configuration requirements.
If you are creating this configuration as an exercise, refer to these Example settings when creating your gateway
subnet.

IMPORTANT
When working with gateway subnets, avoid associating a network security group (NSG) to the gateway subnet. Associating a
network security group to this subnet may cause your VPN gateway to stop functioning as expected. For more information
about network security groups, see What is a network security group?

To create a gateway subnet


1. In the portal, navigate to the Resource Manager virtual network for which you want to create a virtual network
gateway.
2. In the Settings section of your VNet page, click Subnets to expand the Subnets page.
3. On the Subnets page, click +Gateway subnet to open the Add subnet page.

4. The Name for your subnet is automatically filled in with the value 'GatewaySubnet'. This value is required
in order for Azure to recognize the subnet as the gateway subnet. Adjust the auto-filled Address range
values to match your configuration requirements, then click OK at the bottom of the page to create the
subnet.
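If you prefer PowerShell over the portal, the gateway subnet can be added to an existing VNet with a sketch like the following (AzureRM module assumed; names follow the example settings):

```powershell
# The subnet name must be exactly 'GatewaySubnet' for Azure to recognize it.
$vnet = Get-AzureRmVirtualNetwork -Name "TestVNet1" -ResourceGroupName "TestRG1"
Add-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" `
    -AddressPrefix "10.11.255.0/27" -VirtualNetwork $vnet

# Commit the subnet change to the virtual network.
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
```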

4. Specify a DNS server (optional)


DNS is not required for VNet-to-VNet connections. However, if you want to have name resolution for resources
that are deployed to your virtual network, you should specify a DNS server. This setting lets you specify the DNS
server that you want to use for name resolution for this virtual network. It does not create a DNS server.
1. On the Settings page for your virtual network, navigate to DNS Servers and click to open the DNS
servers page.
DNS Servers: Select Custom.
Add DNS server: Enter the IP address of the DNS server that you want to use for name resolution.
2. When you are done adding DNS servers, click Save at the top of the page.

5. Create a virtual network gateway


In this step, you create the virtual network gateway for your VNet. Creating a gateway can often take 45 minutes
or more, depending on the selected gateway SKU. If you are creating this configuration as an exercise, you can
refer to the Example settings.
To create a virtual network gateway
1. In the portal, on the left side, click + and type 'virtual network gateway' in search. Locate Virtual network
gateway in the search return and click the entry. On the Virtual network gateway page, click Create at the
bottom of the page to open the Create virtual network gateway page.
2. On the Create virtual network gateway page, fill in the values for your virtual network gateway.
3. Configure the following settings:
Name: Name your gateway. This is not the same as naming a gateway subnet. It's the name of the
gateway object you are creating.
Gateway type: Select VPN. VPN gateways use the virtual network gateway type VPN.
VPN type: Select the VPN type that is specified for your configuration. Most configurations require a
Route-based VPN type.
SKU: Select the gateway SKU from the dropdown. The SKUs listed in the dropdown depend on the
VPN type you select. For more information about gateway SKUs, see Gateway SKUs.
Location: You may need to scroll to see Location. Adjust the Location field to point to the location
where your virtual network is located. If the location is not pointing to the region where your virtual
network resides, when you select a virtual network in the next step, it will not appear in the drop-down
list.
Virtual network: Choose the virtual network to which you want to add this gateway. Click Virtual
network to open the 'Choose a virtual network' page. Select the VNet. If you don't see your VNet, make
sure the Location field is pointing to the region in which your virtual network is located.
Gateway subnet address range: You will only see this setting if you did not previously create a
gateway subnet for your virtual network. If you previously created a valid gateway subnet, this setting
will not appear.
First IP configuration: The 'Choose public IP address' page creates a public IP address object that
gets associated to the VPN gateway. The public IP address is dynamically assigned to this object
when the VPN gateway is created. VPN Gateway currently only supports Dynamic Public IP address
allocation. However, this does not mean that the IP address changes after it has been assigned to
your VPN gateway. The only time the Public IP address changes is when the gateway is deleted and
re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of
your VPN gateway.
First, click Create gateway IP configuration to open the 'Choose public IP address' page, then
click +Create new to open the 'Create public IP address' page.
Next, input a Name for your public IP address. Leave the SKU as Basic unless there is a
specific reason to change it to something else, then click OK at the bottom of this page to save
your changes.

4. Verify the settings. You can select Pin to dashboard at the bottom of the page if you want your gateway to
appear on the dashboard.
5. Click Create to begin creating the VPN gateway. The settings are validated and you'll see the "Deploying
Virtual network gateway" tile on the dashboard. Creating a gateway can take up to 45 minutes. You may need
to refresh your portal page to see the completed status.
After the gateway is created, view the IP address that has been assigned to it by looking at the virtual network in
the portal. The gateway appears as a connected device. You can click the connected device (your virtual network
gateway) to view more information.
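The same gateway can also be created in PowerShell. The following sketch uses the example settings and assumes the AzureRM module; the gateway SKU shown is illustrative:

```powershell
# Look up the VNet and its gateway subnet.
$vnet   = Get-AzureRmVirtualNetwork -Name "TestVNet1" -ResourceGroupName "TestRG1"
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet

# Request a dynamically allocated public IP for the gateway.
$pip = New-AzureRmPublicIpAddress -Name "TestVNet1GWIP" -ResourceGroupName "TestRG1" `
    -Location "East US" -AllocationMethod Dynamic

# Bind the subnet and public IP into a gateway IP configuration.
$ipconf = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwipconf" `
    -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

# Create the route-based VPN gateway (this can take 45 minutes or more).
New-AzureRmVirtualNetworkGateway -Name "TestVNet1GW" -ResourceGroupName "TestRG1" `
    -Location "East US" -IpConfigurations $ipconf -GatewayType Vpn `
    -VpnType RouteBased -GatewaySku VpnGw1
```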

6. Create and configure TestVNet4


Once you've configured TestVNet1, create TestVNet4 by repeating the previous steps, replacing the values with
those of TestVNet4. You don't need to wait until the virtual network gateway for TestVNet1 has finished creating
before configuring TestVNet4. If you are using your own values, make sure that the address spaces don't overlap
with any of the VNets that you want to connect to.

7. Configure the TestVNet1 gateway connection


When the virtual network gateways for both TestVNet1 and TestVNet4 have completed, you can create your
virtual network gateway connections. In this section, you create a connection from VNet1 to VNet4. These steps
work only for VNets in the same subscription. If your VNets are in different subscriptions, you must use
PowerShell to make the connection. See the PowerShell article. However, if your VNets are in different resource
groups in the same subscription, you can connect them using the portal.
1. In All resources, navigate to the virtual network gateway for your VNet. For example, TestVNet1GW.
Click TestVNet1GW to open the virtual network gateway page.

2. Click +Add to open the Add connection page.

3. On the Add connection page, in the name field, type a name for your connection. For example,
TestVNet1toTestVNet4.
4. For Connection type, select VNet-to-VNet from the dropdown.
5. The First virtual network gateway field value is automatically filled in because you are creating this
connection from the specified virtual network gateway.
6. The Second virtual network gateway field is the virtual network gateway of the VNet that you want to
create a connection to. Click Choose another virtual network gateway to open the Choose virtual
network gateway page.
7. View the virtual network gateways that are listed on this page. Notice that only virtual network gateways that
are in your subscription are listed. If you want to connect to a virtual network gateway that is not in your
subscription, please use the PowerShell article.
8. Click the virtual network gateway that you want to connect to.
9. In the Shared key field, type a shared key for your connection. You can generate or create this key yourself. In
a site-to-site connection, the key you use would be exactly the same for your on-premises device and your
virtual network gateway connection. The concept is similar here, except that rather than connecting to a VPN
device, you are connecting to another virtual network gateway.
10. Click OK at the bottom of the page to save your changes.

8. Configure the TestVNet4 gateway connection


Next, create a connection from TestVNet4 to TestVNet1. In the portal, locate the virtual network gateway
associated with TestVNet4. Follow the steps from the previous section, replacing the values to create a connection
from TestVNet4 to TestVNet1. Make sure that you use the same shared key.
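Both directions of the connection can also be scripted. A PowerShell sketch, assuming both gateways have finished deploying; note the shared key must match on both connections:

```powershell
$gw1 = Get-AzureRmVirtualNetworkGateway -Name "TestVNet1GW" -ResourceGroupName "TestRG1"
$gw4 = Get-AzureRmVirtualNetworkGateway -Name "TestVNet4GW" -ResourceGroupName "TestRG4"

# Connection from TestVNet1 to TestVNet4.
New-AzureRmVirtualNetworkGatewayConnection -Name "TestVNet1toTestVNet4" `
    -ResourceGroupName "TestRG1" -Location "East US" `
    -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw4 `
    -ConnectionType Vnet2Vnet -SharedKey "abc123"

# Reverse connection from TestVNet4 to TestVNet1, with the same shared key.
New-AzureRmVirtualNetworkGatewayConnection -Name "TestVNet4toTestVNet1" `
    -ResourceGroupName "TestRG4" -Location "West US" `
    -VirtualNetworkGateway1 $gw4 -VirtualNetworkGateway2 $gw1 `
    -ConnectionType Vnet2Vnet -SharedKey "abc123"
```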

9. Verify your connections


Locate the virtual network gateway in the portal. On the virtual network gateway page, click Connections to view
the connections page for the virtual network gateway. Once the connection is established, you see the Status
values change to Succeeded and Connected. You can double-click a connection to open the Essentials page and
view more information.

When data begins flowing, you see values for Data in and Data out.

To add additional connections


If you want to add additional connections, navigate to the virtual network gateway that you want to create the
connection from, then click Connections. You can create another VNet-to-VNet connection, or create an IPsec
Site-to-Site connection to an on-premises location. Be sure to adjust the Connection type to match the type of
connection you want to create. Before creating additional connections, verify that the address space for your
virtual network does not overlap with any of the address spaces that you want to connect to. For steps to create a
Site-to-Site connection, see Create a Site-to-Site connection.

VNet-to-VNet FAQ
View the FAQ details for additional information about VNet-to-VNet connections.
The VNet-to-VNet FAQ applies to VPN Gateway connections. If you are looking for VNet peering, see Virtual
Network Peering.
Does Azure charge for traffic between VNets?
VNet-to-VNet traffic within the same region is free for both directions when using a VPN gateway connection.
Cross region VNet-to-VNet egress traffic is charged with the outbound inter-VNet data transfer rates based on
the source regions. Refer to the VPN Gateway pricing page for details. If you are connecting your VNets using
VNet Peering, rather than VPN Gateway, see the Virtual Network pricing page.
Does VNet-to-VNet traffic travel across the Internet?
No. VNet-to-VNet traffic travels across the Microsoft Azure backbone, not the Internet.
Can I establish a VNet-to-VNet connection across AAD Tenants?
Yes, VNet-to-VNet connections using Azure VPN gateways work across AAD Tenants.
Is VNet-to-VNet traffic secure?
Yes, it is protected by IPsec/IKE encryption.
Do I need a VPN device to connect VNets together?
No. Connecting multiple Azure virtual networks together doesn't require a VPN device unless cross-premises
connectivity is required.
Do my VNets need to be in the same region?
No. The virtual networks can be in the same or different Azure regions (locations).
If the VNets are not in the same subscription, do the subscriptions need to be associated with the same AD
tenant?
No.
Can I use VNet-to-VNet to connect virtual networks in separate Azure instances?
No. VNet-to-VNet supports connecting virtual networks within the same Azure instance. For example, you can’t
create a connection between public Azure and the Chinese / German / US Gov Azure instances. For these
scenarios, consider using a Site-to-Site VPN connection.
Can I use VNet-to-VNet along with multi-site connections?
Yes. Virtual network connectivity can be used simultaneously with multi-site VPNs.
How many on-premises sites and virtual networks can one virtual network connect to?
See Gateway requirements table.
Can I use VNet-to-VNet to connect VMs or cloud services outside of a VNet?
No. VNet-to-VNet supports connecting virtual networks. It does not support connecting virtual machines or cloud
services that are not in a virtual network.
Can a cloud service or a load balancing endpoint span VNets?
No. A cloud service or a load balancing endpoint can't span across virtual networks, even if they are connected
together.
Can I use a PolicyBased VPN type for VNet-to-VNet or Multi-Site connections?
No. VNet-to-VNet and Multi-Site connections require Azure VPN gateways with RouteBased (previously called
Dynamic Routing) VPN types.
Can I connect a VNet with a RouteBased VPN type to another VNet with a PolicyBased VPN type?
No, both virtual networks MUST be using route-based (previously called Dynamic Routing) VPNs.
Do VPN tunnels share bandwidth?
Yes. All VPN tunnels of the virtual network share the available bandwidth on the Azure VPN gateway and the
same VPN gateway uptime SLA in Azure.
Are redundant tunnels supported?
Redundant tunnels between a pair of virtual networks are supported when one virtual network gateway is
configured as active-active.
Can I have overlapping address spaces for VNet-to-VNet configurations?
No. You can't have overlapping IP address ranges.
Can there be overlapping address spaces among connected virtual networks and on-premises local sites?
No. You can't have overlapping IP address ranges.
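As a quick planning aid, the no-overlap rule can be checked locally before creating connections. This is a minimal bash sketch (IPv4 only, no input validation), not an Azure tool:

```shell
#!/usr/bin/env bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Print "overlap" if two CIDR ranges intersect, "ok" otherwise.
cidr_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local mask1 mask2 start1 end1 start2 end2
  mask1=$(( 0xFFFFFFFF << (32 - len1) & 0xFFFFFFFF ))
  mask2=$(( 0xFFFFFFFF << (32 - len2) & 0xFFFFFFFF ))
  start1=$(( $(ip_to_int "$net1") & mask1 )); end1=$(( start1 | ~mask1 & 0xFFFFFFFF ))
  start2=$(( $(ip_to_int "$net2") & mask2 )); end2=$(( start2 | ~mask2 & 0xFFFFFFFF ))
  if (( start1 <= end2 && start2 <= end1 )); then echo overlap; else echo ok; fi
}

cidr_overlap 10.1.0.0/16 10.1.0.0/16   # overlap
cidr_overlap 10.1.0.0/16 10.2.0.0/16   # ok
```

Two ranges overlap whenever the start of one falls at or before the end of the other in both directions, which is the single comparison the function performs.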

Next steps
See Network Security for information about how you can limit network traffic to resources in a virtual network.
See Virtual network traffic routing for information about how Azure routes traffic between Azure, on-premises,
and Internet resources.
Create a Site-to-Site connection in the Azure portal
4/9/2018 • 19 minutes to read • Edit Online

This article shows you how to use the Azure portal to create a Site-to-Site VPN gateway connection from your
on-premises network to the VNet. The steps in this article apply to the Resource Manager deployment model. You
can also create this configuration using a different deployment tool or deployment model.
A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network
over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-
premises that has an externally facing public IP address assigned to it. For more information about VPN gateways,
see About VPN gateway.

Before you begin


Verify that you have met the following criteria before beginning your configuration:
Make sure you have a compatible VPN device and someone who is able to configure it. For more information
about compatible VPN devices and device configuration, see About VPN Devices.
Verify that you have an externally facing public IPv4 address for your VPN device. This IP address cannot be
located behind a NAT.
If you are unfamiliar with the IP address ranges located in your on-premises network configuration, you need
to coordinate with someone who can provide those details for you. When you create this configuration, you
must specify the IP address range prefixes that Azure will route to your on-premises location. None of the
subnets of your on-premises network can overlap with the virtual network subnets that you want to connect
to.
Example values
The examples in this article use the following values. You can use these values to create a test environment, or
refer to them to better understand the examples in this article. For more information about VPN Gateway settings
in general, see About VPN Gateway Settings.
VNet Name: TestVNet1
Address Space: 10.1.0.0/16
Subscription: The subscription you want to use
Resource Group: TestRG1
Location: East US
Subnet: FrontEnd: 10.1.0.0/24, BackEnd: 10.1.1.0/24 (optional for this exercise)
Gateway Subnet name: GatewaySubnet (this will auto-fill in the portal)
Gateway Subnet address range: 10.1.255.0/27
DNS Server: 8.8.8.8 - Optional. The IP address of your DNS server.
Virtual Network Gateway Name: VNet1GW
Public IP: VNet1GWIP
VPN Type: Route-based
Connection Type: Site-to-site (IPsec)
Gateway Type: VPN
Local Network Gateway Name: Site1
Connection Name: VNet1toSite1
Shared key: For this example, we use abc123. But, you can use whatever is compatible with your VPN
hardware. The important thing is that the values match on both sides of the connection.

1. Create a virtual network


To create a VNet in the Resource Manager deployment model by using the Azure portal, follow the steps below.
Use the example values if you are using these steps as a tutorial. If you are not doing these steps as a tutorial, be
sure to replace the values with your own. For more information about working with virtual networks, see the
Virtual Network Overview.

NOTE
In order for this VNet to connect to an on-premises location you need to coordinate with your on-premises network
administrator to carve out an IP address range that you can use specifically for this virtual network. If a duplicate address
range exists on both sides of the VPN connection, traffic does not route the way you may expect it to. Additionally, if you
want to connect this VNet to another VNet, the address space cannot overlap with the address space of the other VNet. Take care to plan your
network configuration accordingly.

1. From a browser, navigate to the Azure portal and sign in with your Azure account.
2. Click Create a resource. In the Search the marketplace field, type 'virtual network'. Locate Virtual network
from the returned list and click to open the Virtual Network page.
3. Near the bottom of the Virtual Network page, from the Select a deployment model list, select Resource
Manager, and then click Create. This opens the 'Create virtual network' page.
4. On the Create virtual network page, configure the VNet settings. When you fill in the fields, the red
exclamation mark becomes a green check mark when the characters entered in the field are valid.
Name: Enter the name for your virtual network. In this example, we use VNet1.
Address space: Enter the address space. If you have multiple address spaces to add, add your first
address space. You can add additional address spaces later, after creating the VNet. Make sure that the
address space that you specify does not overlap with the address space for your on-premises location.
Subscription: Verify that the subscription listed is the correct one. You can change subscriptions by
using the drop-down.
Resource group: Select an existing resource group, or create a new one by typing a name for your new
resource group. If you are creating a new group, name the resource group according to your planned
configuration values. For more information about resource groups, visit Azure Resource Manager
Overview.
Location: Select the location for your VNet. The location determines where the resources that you
deploy to this VNet will reside.
Subnet: Add the first subnet name and subnet address range. You can add additional subnets and the
gateway subnet later, after creating this VNet.
5. Select Pin to dashboard if you want to be able to find your VNet easily on the dashboard, and then click
Create. After clicking Create, you will see a tile on your dashboard that will reflect the progress of your
VNet. The tile changes as the VNet is being created.
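The portal steps above can also be scripted. The following Azure CLI sketch uses the example values from this article and assumes the resource group TestRG1 already exists (for example, via `az group create`); replace names and prefixes with your own:

```shell
# Sketch (Azure CLI): create the virtual network and its first subnet
# using the example values from this article.
az network vnet create \
  --resource-group TestRG1 \
  --name TestVNet1 \
  --location eastus \
  --address-prefix 10.1.0.0/16 \
  --subnet-name FrontEnd \
  --subnet-prefix 10.1.0.0/24
```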

2. Specify a DNS server


DNS is not required to create a Site-to-Site connection. However, if you want to have name resolution for
resources that are deployed to your virtual network, you should specify a DNS server. This setting lets you specify
the DNS server that you want to use for name resolution for this virtual network. It does not create a DNS server.
For more information about name resolution, see Name Resolution for VMs and role instances.
1. On the Settings page for your virtual network, navigate to DNS Servers and click to open the DNS
servers page.

DNS Servers: Select Custom.


Add DNS server: Enter the IP address of the DNS server that you want to use for name resolution.
2. When you are done adding DNS servers, click Save at the top of the page.
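The same setting can be applied with the Azure CLI; this sketch uses the example DNS server value from this article:

```shell
# Sketch (Azure CLI): set a custom DNS server on the virtual network.
# This records the address for the VNet; it does not create a DNS server.
az network vnet update \
  --resource-group TestRG1 \
  --name TestVNet1 \
  --dns-servers 8.8.8.8
```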

3. Create the gateway subnet


The virtual network gateway uses a specific subnet called the gateway subnet. The gateway subnet is part of the
virtual network IP address range that you specify when configuring your virtual network. It contains the IP
addresses that the virtual network gateway resources and services use. The subnet must be named
'GatewaySubnet' in order for Azure to deploy the gateway resources. You can't specify a different subnet to deploy
the gateway resources to. If you don't have a subnet named 'GatewaySubnet', when you create your VPN
gateway, it will fail.
When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The
number of IP addresses needed depends on the VPN gateway configuration that you want to create. Some
configurations require more IP addresses than others. We recommend that you create a gateway subnet that uses
a /27 or /28.
If you see an error that specifies that the address space overlaps with a subnet, or that the subnet is not contained
within the address space for your virtual network, check your VNet address range. You may not have enough IP
addresses available in the address range you created for your virtual network. For example, if your default subnet
encompasses the entire address range, there are no IP addresses left to create additional subnets. You can either
adjust your subnets within the existing address space to free up IP addresses, or specify an additional address
range and create the gateway subnet there.
1. In the portal, navigate to the virtual network for which you want to create a virtual network gateway.
2. In the Settings section of your VNet page, click Subnets to expand the Subnets page.
3. On the Subnets page, click +Gateway subnet at the top to open the Add subnet page.

4. The Name for your subnet is automatically filled in with the value 'GatewaySubnet'. The GatewaySubnet
value is required in order for Azure to recognize the subnet as the gateway subnet. Adjust the auto-filled
Address range values to match your configuration requirements.

5. To create the subnet, click OK at the bottom of the page.

IMPORTANT
When working with gateway subnets, avoid associating a network security group (NSG) to the gateway subnet. Associating
a network security group to this subnet may cause your VPN gateway to stop functioning as expected. For more
information about network security groups, see What is a network security group?
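The gateway subnet can also be added with the Azure CLI. This sketch uses the example address range from this article:

```shell
# Sketch (Azure CLI): add the gateway subnet. The name must be exactly
# 'GatewaySubnet' for Azure to deploy gateway resources into it.
az network vnet subnet create \
  --resource-group TestRG1 \
  --vnet-name TestVNet1 \
  --name GatewaySubnet \
  --address-prefix 10.1.255.0/27
```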

4. Create the VPN gateway


1. On the left side of the portal page, click + and type 'Virtual Network Gateway' in search. In Results, locate and
click Virtual network gateway.
2. At the bottom of the 'Virtual network gateway' page, click Create. This opens the Create virtual network
gateway page.
3. On the Create virtual network gateway page, specify the values for your virtual network gateway.
Name: Name your gateway. This is not the same as naming a gateway subnet. It's the name of the
gateway object you are creating.
Gateway type: Select VPN. VPN gateways use the virtual network gateway type VPN.
VPN type: Select the VPN type that is specified for your configuration. Most configurations require a
Route-based VPN type.
SKU: Select the gateway SKU from the dropdown. The SKUs listed in the dropdown depend on the
VPN type you select. For more information about gateway SKUs, see Gateway SKUs.
Location: You may need to scroll to see Location. Adjust the Location field to point to the location
where your virtual network is located. If the location is not pointing to the region where your virtual
network resides, when you select a virtual network in the next step, it will not appear in the drop-down
list.
Virtual network: Choose the virtual network to which you want to add this gateway. Click Virtual
network to open the 'Choose a virtual network' page. Select the VNet. If you don't see your VNet, make
sure the Location field is pointing to the region in which your virtual network is located.
Gateway subnet address range: You will only see this setting if you did not previously create a
gateway subnet for your virtual network. If you previously created a valid gateway subnet, this setting
will not appear.
First IP configuration: The 'Choose public IP address' page creates a public IP address object that
gets associated to the VPN gateway. The public IP address is dynamically assigned to this object
when the VPN gateway is created. VPN Gateway currently only supports Dynamic Public IP address
allocation. However, this does not mean that the IP address changes after it has been assigned to
your VPN gateway. The only time the Public IP address changes is when the gateway is deleted and
re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of
your VPN gateway.
First, click Create gateway IP configuration to open the 'Choose public IP address' page, then
click +Create new to open the 'Create public IP address' page.
Next, input a Name for your public IP address. Leave the SKU as Basic unless there is a
specific reason to change it to something else, then click OK at the bottom of this page to save
your changes.

4. Verify the settings. You can select Pin to dashboard at the bottom of the page if you want your gateway to
appear on the dashboard.
5. Click Create to begin creating the VPN gateway. The settings are validated and you'll see the "Deploying
Virtual network gateway" tile on the dashboard. Creating a gateway can take up to 45 minutes. You may need
to refresh your portal page to see the completed status.
After the gateway is created, view the IP address that has been assigned to it by looking at the virtual network in
the portal. The gateway appears as a connected device. You can click the connected device (your virtual network
gateway) to view more information.
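The gateway creation steps above can be sketched with the Azure CLI as follows. The SKU value VpnGw1 is an assumption for illustration (this article does not prescribe a SKU); choose the SKU that fits your requirements:

```shell
# Sketch (Azure CLI): create the public IP object, then the VPN gateway.
# Gateway creation can take up to 45 minutes; --no-wait returns immediately.
az network public-ip create \
  --resource-group TestRG1 \
  --name VNet1GWIP \
  --allocation-method Dynamic

az network vnet-gateway create \
  --resource-group TestRG1 \
  --name VNet1GW \
  --vnet TestVNet1 \
  --public-ip-address VNet1GWIP \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --no-wait
```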

5. Create the local network gateway


The local network gateway typically refers to your on-premises location. You give the site a name by which Azure
can refer to it, then specify the IP address of the on-premises VPN device to which you will create a connection.
You also specify the IP address prefixes that will be routed through the VPN gateway to the VPN device. The
address prefixes you specify are the prefixes located on your on-premises network. If your on-premises network
changes or you need to change the public IP address for the VPN device, you can easily update the values later.
1. In the portal, click +Create a resource.
2. In the search box, type Local network gateway, then press Enter to search. This will return a list of
results. Click Local network gateway, then click the Create button to open the Create local network
gateway page.

3. On the Create local network gateway page, specify the values for your local network gateway.
Name: Specify a name for your local network gateway object.
IP address: This is the public IP address of the VPN device that you want Azure to connect to. Specify a
valid public IP address. The IP address cannot be behind NAT and has to be reachable by Azure. If you
don't have the IP address right now, you can use the values shown in the example, but you'll need to go
back and replace your placeholder IP address with the public IP address of your VPN device. Otherwise,
Azure will not be able to connect.
Address Space refers to the address ranges for the network that this local network represents. You can
add multiple address space ranges. Make sure that the ranges you specify here do not overlap with
ranges of other networks that you want to connect to. Azure will route the address range that you
specify to the on-premises VPN device IP address. Use your own values here if you want to connect to
your on-premises site, not the values shown in the example.
Configure BGP settings: Use only when configuring BGP. Otherwise, don't select this.
Subscription: Verify that the correct subscription is showing.
Resource Group: Select the resource group that you want to use. You can either create a new resource
group, or select one that you have already created.
Location: Select the location that this object will be created in. You may want to select the same location
that your VNet resides in, but you are not required to do so.
4. When you have finished specifying the values, click the Create button at the bottom of the page to create
the local network gateway.
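The same local network gateway can be created with the Azure CLI. In this sketch, 203.0.113.1 is a documentation placeholder for your VPN device's public IP address, and 10.0.0.0/24 is a placeholder for your on-premises address ranges:

```shell
# Sketch (Azure CLI): create the local network gateway representing the
# on-premises site. Replace the IP address and prefixes with your own.
az network local-gateway create \
  --resource-group TestRG1 \
  --name Site1 \
  --gateway-ip-address 203.0.113.1 \
  --local-address-prefixes 10.0.0.0/24
```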

6. Configure your VPN device


Site-to-Site connections to an on-premises network require a VPN device. In this step, you configure your VPN
device. When configuring your VPN device, you need the following:
A shared key. This is the same shared key that you specify when creating your Site-to-Site VPN connection. In
our examples, we use a basic shared key. We recommend that you generate a more complex key to use.
The Public IP address of your virtual network gateway. You can view the public IP address by using the Azure
portal, PowerShell, or CLI. To find the Public IP address of your VPN gateway using the Azure portal, navigate
to Virtual network gateways, then click the name of your gateway.
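To find the gateway's public IP from the CLI instead, one option is to query the public IP object that was associated with the gateway (VNet1GWIP in this article's example values):

```shell
# Sketch (Azure CLI): print the public IP address assigned to the
# VPN gateway's public IP object.
az network public-ip show \
  --resource-group TestRG1 \
  --name VNet1GWIP \
  --query ipAddress \
  --output tsv
```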
To download VPN device configuration scripts:
Depending on the VPN device that you have, you may be able to download a VPN device configuration script. For
more information, see Download VPN device configuration scripts.
See the following links for additional configuration information:
For information about compatible VPN devices, see VPN Devices.
Before configuring your VPN device, check for any Known device compatibility issues for the VPN device
that you want to use.
For links to device configuration settings, see Validated VPN Devices. The device configuration links are
provided on a best-effort basis. It's always best to check with your device manufacturer for the latest
configuration information. The list shows the versions we have tested. If your OS is not on that list, it is still
possible that the version is compatible. Check with your device manufacturer to verify that the OS version for
your VPN device is compatible.
For an overview of VPN device configuration, see Overview of 3rd party VPN device configurations.
For information about editing device configuration samples, see Editing samples.
For cryptographic requirements, see About cryptographic requirements and Azure VPN gateways.
For information about IPsec/IKE parameters, see About VPN devices and IPsec/IKE parameters for Site-to-
Site VPN gateway connections. This link shows information about IKE version, Diffie-Hellman Group,
Authentication method, encryption and hashing algorithms, SA lifetime, PFS, and DPD, in addition to other
parameter information that you need to complete your configuration.
For IPsec/IKE policy configuration steps, see Configure IPsec/IKE policy for S2S VPN or VNet-to-VNet
connections.
To connect multiple policy-based VPN devices, see Connect Azure VPN gateways to multiple on-premises
policy-based VPN devices using PowerShell.

7. Create the VPN connection


Create the Site-to-Site VPN connection between your virtual network gateway and your on-premises VPN device.
1. Navigate to and open the page for your virtual network gateway. There are multiple ways to navigate. You can
navigate to the gateway 'VNet1GW' by going to TestVNet1 -> Overview -> Connected devices ->
VNet1GW.
2. On the page for VNet1GW, click Connections. At the top of the Connections page, click +Add to open the
Add connection page.
3. On the Add connection page, configure the values for your connection.
Name: Name your connection.
Connection type: Select Site-to-site (IPsec).
Virtual network gateway: The value is fixed because you are connecting from this gateway.
Local network gateway: Click Choose a local network gateway and select the local network
gateway that you want to use.
Shared Key: The value here must match the value that you are using for your local on-premises VPN
device. The example uses 'abc123', but you can (and should) use something more complex. The
important thing is that the value you specify here must be the same value that you specify when
configuring your VPN device.
The remaining values for Subscription, Resource Group, and Location are fixed.
4. Click OK to create your connection. You'll see Creating Connection flash on the screen.
5. You can view the connection in the Connections page of the virtual network gateway. The Status will go from
Unknown to Connecting, and then to Succeeded.
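The connection created in the steps above can be sketched with the Azure CLI using this article's example values; the shared key must match the one configured on your on-premises VPN device:

```shell
# Sketch (Azure CLI): create the Site-to-Site connection between the
# virtual network gateway and the local network gateway.
az network vpn-connection create \
  --resource-group TestRG1 \
  --name VNet1toSite1 \
  --vnet-gateway1 VNet1GW \
  --local-gateway2 Site1 \
  --shared-key abc123
```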

8. Verify the VPN connection


In the Azure portal, you can view the connection status of a Resource Manager VPN Gateway by navigating to the
connection. The following steps show one way to navigate to your connection and verify.
1. In the Azure portal, click All resources and navigate to your virtual network gateway.
2. On the blade for your virtual network gateway, click Connections. You can see the status of each connection.
3. Click the name of the connection that you want to verify to open Essentials. In Essentials, you can view
more information about your connection. The Status is 'Succeeded' and 'Connected' when you have made
a successful connection.
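The status can also be checked from the CLI; this sketch queries the connection created from this article's example values:

```shell
# Sketch (Azure CLI): check the connection status. Expect 'Connecting'
# shortly after creation and 'Connected' once the tunnel is established.
az network vpn-connection show \
  --resource-group TestRG1 \
  --name VNet1toSite1 \
  --query connectionStatus \
  --output tsv
```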

To connect to a virtual machine


You can connect to a VM that is deployed to your VNet by creating a Remote Desktop Connection to your VM.
The best way to initially verify that you can connect to your VM is to connect by using its private IP address, rather
than computer name. That way, you are testing to see if you can connect, not whether name resolution is
configured properly.
1. Locate the private IP address. You can find the private IP address of a VM in multiple ways. Below, we show
the steps for the Azure portal and for PowerShell.
Azure portal - Locate your virtual machine in the Azure portal. View the properties for the VM. The
private IP address is listed.
PowerShell - Use the example to view a list of VMs and private IP addresses from your resource
groups. You don't need to modify this example before using it.
$VMs = Get-AzureRmVM
$Nics = Get-AzureRmNetworkInterface | Where VirtualMachine -ne $null

foreach ($Nic in $Nics)
{
    $VM = $VMs | Where-Object -Property Id -eq $Nic.VirtualMachine.Id
    $Prv = $Nic.IpConfigurations | Select-Object -ExpandProperty PrivateIpAddress
    $Alloc = $Nic.IpConfigurations | Select-Object -ExpandProperty PrivateIpAllocationMethod
    Write-Output "$($VM.Name): $Prv,$Alloc"
}

2. Verify that you are connected to your VNet using the VPN connection.
3. Open Remote Desktop Connection by typing "RDP" or "Remote Desktop Connection" in the search box on
the taskbar, then select Remote Desktop Connection. You can also open Remote Desktop Connection using the
'mstsc' command in PowerShell.
4. In Remote Desktop Connection, enter the private IP address of the VM. You can click "Show Options" to adjust
additional settings, then connect.
To troubleshoot an RDP connection to a VM
If you are having trouble connecting to a virtual machine over your VPN connection, check the following:
Verify that your VPN connection is successful.
Verify that you are connecting to the private IP address for the VM.
If you can connect to the VM using the private IP address, but not the computer name, verify that you have
configured DNS properly. For more information about how name resolution works for VMs, see Name
Resolution for VMs.
For more information about RDP connections, see Troubleshoot Remote Desktop connections to a VM.

How to reset a VPN gateway


Resetting an Azure VPN gateway is helpful if you lose cross-premises VPN connectivity on one or more Site-to-
Site VPN tunnels. In this situation, your on-premises VPN devices are all working correctly, but are not able to
establish IPsec tunnels with the Azure VPN gateways. For steps, see Reset a VPN gateway.

How to change a gateway SKU (resize a gateway)


For the steps to change a gateway SKU, see Gateway SKUs.

How to add an additional connection to a VPN gateway


You can add additional connections, provided that none of the address spaces overlap between connections.
1. To add an additional connection, navigate to the VPN gateway, then click Connections to open the
Connections page.
2. Click +Add to add your connection. Adjust the connection type to reflect either VNet-to-VNet (if connecting to
another VNet gateway), or Site-to-site.
3. If you are connecting using Site-to-site and you have not already created a local network gateway for the site
you want to connect to, you can create a new one.
4. Specify the shared key that you want to use, then click OK to create the connection.

Next steps
For information about BGP, see the BGP Overview and How to configure BGP.
For information about forced tunneling, see About forced tunneling.
For information about Highly Available Active-Active connections, see Highly Available cross-premises and
VNet-to-VNet connectivity.
For information about how to limit network traffic to resources in a virtual network, see Network Security.
For information about how Azure routes traffic between Azure, on-premises, and Internet resources, see
Virtual network traffic routing.
For information about creating a Site-to-Site VPN connection using Azure Resource Manager template, see
Create a Site-to-Site VPN Connection.
For information about creating a Vnet-to-Vnet VPN connection using Azure Resource Manager template, see
Deploy HBase geo replication.
Configure a Point-to-Site connection to a VNet using
native Azure certificate authentication: Azure portal
3/21/2018 • 27 minutes to read • Edit Online

This article helps you securely connect individual clients running Windows or Mac OS X to an Azure VNet. Point-
to-Site VPN connections are useful when you want to connect to your VNet from a remote location, such as when
you are telecommuting from home or a conference. You can also use P2S instead of a Site-to-Site VPN when you have
only a few clients that need to connect to a VNet. Point-to-Site connections do not require a VPN device or a
public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol), or
IKEv2. For more information about Point-to-Site VPN, see About Point-to-Site VPN.

Architecture
Point-to-Site native Azure certificate authentication connections use the following items, which you configure in
this exercise:
A RouteBased VPN gateway.
The public key (.cer file) for a root certificate, which is uploaded to Azure. Once the certificate is uploaded, it is
considered a trusted certificate and is used for authentication.
A client certificate that is generated from the root certificate. The client certificate is installed on each client
computer that will connect to the VNet. This certificate is used for client authentication.
A VPN client configuration. The VPN client configuration files contain the necessary information for the client to
connect to the VNet. The files configure the existing VPN client that is native to the operating system. Each client
that connects must be configured using the settings in the configuration files.
Example values
You can use the following values to create a test environment, or refer to these values to better understand the
examples in this article:
VNet Name: VNet1
Address space: 192.168.0.0/16
For this example, we use only one address space. You can have more than one address space for your VNet.
Subnet name: FrontEnd
Subnet address range: 192.168.1.0/24
Subscription: If you have more than one subscription, verify that you are using the correct one.
Resource Group: TestRG
Location: East US
GatewaySubnet: 192.168.200.0/24
DNS Server: (optional) IP address of the DNS server that you want to use for name resolution.
Virtual network gateway name: VNet1GW
Gateway type: VPN
VPN type: Route-based
Public IP address name: VNet1GWpip
Connection type: Point-to-site
Client address pool: 172.16.201.0/24
VPN clients that connect to the VNet using this Point-to-Site connection receive an IP address from the client
address pool.

1. Create a virtual network


Before beginning, verify that you have an Azure subscription. If you don't already have an Azure subscription, you
can activate your MSDN subscriber benefits or sign up for a free account.
To create a VNet in the Resource Manager deployment model by using the Azure portal, follow the steps below.
The screenshots are provided as examples. Be sure to replace the values with your own. For more information
about working with virtual networks, see the Virtual Network Overview.

NOTE
If you want this VNet to connect to an on-premises location (in addition to creating a P2S configuration), you need to
coordinate with your on-premises network administrator to carve out an IP address range that you can use specifically for
this virtual network. If a duplicate address range exists on both sides of the VPN connection, traffic does not route the way
you may expect it to. Additionally, if you want to connect this VNet to another VNet, the address space cannot overlap with
the address space of the other VNet. Take care to plan your network configuration accordingly.

1. From a browser, navigate to the Azure portal and, if necessary, sign in with your Azure account.
2. Click +. In the Search the marketplace field, type "Virtual Network". Locate Virtual Network from the
returned list and click to open the Virtual Network page.

3. Near the bottom of the Virtual Network page, from the Select a deployment model list, select Resource
Manager, and then click Create.
4. On the Create virtual network page, configure the VNet settings. When you fill in the fields, the red
exclamation mark becomes a green check mark when the characters entered in the field are valid. There may
be values that are auto-filled. If so, replace the values with your own. The Create virtual network page
looks similar to the following example:

5. Name: Enter the name for your Virtual Network.


6. Address space: Enter the address space. If you have multiple address spaces to add, add your first address
space. You can add additional address spaces later, after creating the VNet.
7. Subscription: Verify that the Subscription listed is the correct one. You can change subscriptions by using the
drop-down.
8. Resource group: Select an existing resource group, or create a new one by typing a name for your new
resource group. If you are creating a new group, name the resource group according to your planned
configuration values. For more information about resource groups, visit Azure Resource Manager Overview.
9. Location: Select the location for your VNet. The location determines where the resources that you deploy to
this VNet will reside.
10. Subnet: Add the subnet name and subnet address range. You can add additional subnets later, after creating the
VNet.
11. Select Pin to dashboard if you want to be able to find your VNet easily on the dashboard, and then click
Create.

12. After clicking Create, you will see a tile on your dashboard that will reflect the progress of your VNet. The
tile changes as the VNet is being created.
2. Add a gateway subnet
Before connecting your virtual network to a gateway, you first need to create the gateway subnet for the virtual
network to which you want to connect. The gateway services use the IP addresses specified in the gateway subnet.
If possible, create a gateway subnet using a CIDR block of /28 or /27 to provide enough IP addresses to
accommodate additional future configuration requirements.
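To see why a /27 or /28 gateway subnet leaves room to grow, you can compare the usable address counts of the two prefixes. The sketch below (in Python, since the calculation is generic rather than an Azure cmdlet) uses hypothetical address ranges; the "minus 5" reflects the addresses Azure reserves in every subnet:

```python
# Sketch: compare usable host counts for candidate gateway subnet sizes.
# The prefixes below are hypothetical examples, not values from this article.
import ipaddress

for prefix in ("10.1.255.0/28", "10.1.255.0/27"):
    subnet = ipaddress.ip_network(prefix)
    # Azure reserves 5 addresses in each subnet (network, broadcast,
    # and 3 internal addresses), so usable = total - 5.
    usable = subnet.num_addresses - 5
    print(f"{prefix}: {subnet.num_addresses} addresses, {usable} usable")
```

A /28 yields 11 usable addresses and a /27 yields 27, which is why the larger block better accommodates future gateway configuration changes.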
1. In the portal, navigate to the Resource Manager virtual network for which you want to create a virtual network
gateway.
2. In the Settings section of your VNet page, click Subnets to expand the Subnets page.
3. On the Subnets page, click +Gateway subnet to open the Add subnet page.

4. The Name for your subnet is automatically filled in with the value 'GatewaySubnet'. This value is required in
order for Azure to recognize the subnet as the gateway subnet. Adjust the auto-filled Address range values
to match your configuration requirements, then click OK at the bottom of the page to create the subnet.

3. Specify a DNS server (optional)


After you create your virtual network, you can add the IP address of a DNS server to handle name resolution. The
DNS server is optional for this configuration, but required if you want name resolution. Specifying a value does not
create a new DNS server. The DNS server IP address that you specify should be a DNS server that can resolve the
names for the resources you are connecting to. For this example, we used a private IP address, but it is likely that
this is not the IP address of your DNS server. Be sure to use your own values. The value you specify is used by the
resources that you deploy to the VNet, not by the P2S connection or the VPN client.
1. On the Settings page for your virtual network, navigate to DNS Servers and click to open the DNS
servers page.
DNS Servers: Select Custom.
Add DNS server: Enter the IP address of the DNS server that you want to use for name resolution.
2. When you are done adding DNS servers, click Save at the top of the page.

4. Create a virtual network gateway


1. In the portal, on the left side, click + Create a resource and type 'Virtual Network Gateway' in search. Locate
Virtual network gateway in the search return and click the entry. On the Virtual network gateway page,
click Create at the bottom of the page to open the Create virtual network gateway page.
2. On the Create virtual network gateway page, fill in the values for your virtual network gateway.

3. On the Create virtual network gateway page, specify the values for your virtual network gateway.
Name: Name your gateway. This is not the same as naming a gateway subnet. It's the name of the
gateway object you are creating.
Gateway type: Select VPN. VPN gateways use the virtual network gateway type VPN.
VPN type: Select the VPN type that is specified for your configuration. Most configurations require a
Route-based VPN type.
SKU: Select the gateway SKU from the dropdown. The SKUs listed in the dropdown depend on the VPN
type you select. For more information about gateway SKUs, see Gateway SKUs.
Location: You may need to scroll to see Location. Adjust the Location field to point to the location
where your virtual network is located. If the location is not pointing to the region where your virtual
network resides, when you select a virtual network in the next step, it will not appear in the drop-down
list.
Virtual network: Choose the virtual network to which you want to add this gateway. Click Virtual
network to open the 'Choose a virtual network' page. Select the VNet. If you don't see your VNet, make
sure the Location field is pointing to the region in which your virtual network is located.
Gateway subnet address range: You will only see this setting if you did not previously create a gateway
subnet for your virtual network. If you previously created a valid gateway subnet, this setting will not
appear.
First IP configuration: The 'Choose public IP address' page creates a public IP address object that
gets associated to the VPN gateway. The public IP address is dynamically assigned to this object
when the VPN gateway is created. VPN Gateway currently only supports Dynamic Public IP address
allocation. However, this does not mean that the IP address changes after it has been assigned to your
VPN gateway. The only time the Public IP address changes is when the gateway is deleted and re-
created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your
VPN gateway.
First, click Create gateway IP configuration to open the 'Choose public IP address' page, then
click +Create new to open the 'Create public IP address' page.
Next, input a Name for your public IP address. Leave the SKU as Basic unless there is a
specific reason to change it to something else, then click OK at the bottom of this page to save
your changes.

4. Verify the settings. You can select Pin to dashboard at the bottom of the page if you want your gateway to
appear on the dashboard.
5. Click Create to begin creating the VPN gateway. The settings are validated and you'll see the "Deploying Virtual
network gateway" tile on the dashboard. Creating a gateway can take up to 45 minutes. You may need to refresh
your portal page to see the completed status.
After the gateway is created, view the IP address that has been assigned to it by looking at the virtual network in
the portal. The gateway appears as a connected device. You can click the connected device (your virtual network
gateway) to view more information.
NOTE
The Basic SKU does not support IKEv2 or RADIUS authentication.

5. Generate certificates
Certificates are used by Azure to authenticate clients connecting to a VNet over a Point-to-Site VPN connection.
Once you obtain a root certificate, you upload the public key information to Azure. The root certificate is then
considered 'trusted' by Azure for connection over P2S to the virtual network. You also generate client certificates
from the trusted root certificate, and then install them on each client computer. The client certificate is used to
authenticate the client when it initiates a connection to the VNet.
1. Obtain the .cer file for the root certificate
You can use either a root certificate that was generated using an enterprise solution (recommended), or you can
generate a self-signed certificate. After creating the root certificate, export the public certificate data (not the private
key) as a Base-64 encoded X.509 .cer file and upload the public certificate data to Azure.
Enterprise certificate: If you are using an enterprise solution, you can use your existing certificate chain.
Obtain the .cer file for the root certificate that you want to use.
Self-signed root certificate: If you aren't using an enterprise certificate solution, you need to create a self-
signed root certificate. It's important that you follow the steps in one of the P2S certificate articles below.
Otherwise, the certificates you create won't be compatible with P2S connections and clients receive a
connection error when trying to connect. You can use Azure PowerShell, MakeCert, or OpenSSL. The steps
in the provided articles generate a compatible certificate:
Windows 10 PowerShell instructions: These instructions require Windows 10 and PowerShell to
generate certificates. Client certificates that are generated from the root certificate can be installed on any
supported P2S client.
MakeCert instructions: Use MakeCert if you don't have access to a Windows 10 computer for
generating certificates. MakeCert is deprecated, but you can still use it to generate certificates. Client
certificates that are generated from the root certificate can be installed on any supported P2S client.
2. Generate a client certificate
Each client computer that connects to a VNet using Point-to-Site must have a client certificate installed. The client
certificate is generated from the root certificate and installed on each client computer. If a valid client certificate is
not installed and the client tries to connect to the VNet, authentication fails.
You can either generate a unique certificate for each client, or you can use the same certificate for multiple clients.
The advantage to generating unique client certificates is the ability to revoke a single certificate. Otherwise, if
multiple clients are using the same client certificate and you need to revoke it, you have to generate and install new
certificates for all the clients that use that certificate to authenticate.
You can generate client certificates using the following methods:
Enterprise certificate:
If you are using an enterprise certificate solution, generate a client certificate with the common name
value format 'name@yourdomain.com', rather than the 'domain name\username' format.
Make sure the client certificate is based on the 'User' certificate template that has 'Client Authentication'
as the first item in the use list, rather than Smart Card Logon, etc. You can check the certificate by double-
clicking the client certificate and viewing Details > Enhanced Key Usage.
Self-signed root certificate: It's important that you follow the steps in one of the P2S certificate articles
below. Otherwise, the client certificates you create won't be compatible with P2S connections and clients
receive an error when trying to connect. The steps in either of the following articles generate a compatible
client certificate:
Windows 10 PowerShell instructions: These instructions require Windows 10 and PowerShell to
generate certificates. The certificates that are generated can be installed on any supported P2S client.
MakeCert instructions: Use MakeCert if you don't have access to a Windows 10 computer for
generating certificates. MakeCert is deprecated, but you can still use it to generate certificates. The
certificates that are generated can be installed on any supported P2S client.
When you generate a client certificate from a self-signed root certificate using the preceding instructions, it's
automatically installed on the computer that you used to generate it. If you want to install a client certificate
on another client computer, you need to export it as a .pfx, along with the entire certificate chain. This creates
a .pfx file that contains the root certificate information that is required for the client to successfully
authenticate. For steps to export a certificate, see Certificates - export a client certificate.

6. Add the client address pool


The client address pool is a range of private IP addresses that you specify. The clients that connect over a Point-to-
Site VPN dynamically receive an IP address from this range. Use a private IP address range that does not overlap
with the on-premises location that you connect from, or the VNet that you want to connect to.
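The non-overlap requirement above can be checked programmatically before you save the pool. This sketch (in Python, since the check is generic) uses hypothetical ranges; substitute your own VNet, on-premises, and client pool prefixes:

```python
# Sketch: verify a proposed P2S client address pool does not overlap the VNet
# address space or the on-premises range. All ranges here are hypothetical.
import ipaddress

vnet = ipaddress.ip_network("10.1.0.0/16")
on_premises = ipaddress.ip_network("192.168.0.0/24")
client_pool = ipaddress.ip_network("172.16.201.0/24")

for name, network in (("VNet", vnet), ("on-premises", on_premises)):
    if client_pool.overlaps(network):
        print(f"Client pool overlaps the {name} range - choose another pool")
    else:
        print(f"Client pool does not overlap the {name} range")
```

If either check reports an overlap, pick a different private range for the pool before continuing.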
1. Once the virtual network gateway has been created, navigate to the Settings section of the virtual network
gateway page. In the Settings section, click Point-to-site configuration.

2. Click Configure now to open the configuration page.

3. On the Point-to-site configuration page, in the Address pool box, add the private IP address range that
you want to use. VPN clients dynamically receive an IP address from the range that you specify. Click Save
to validate and save the setting.
NOTE
If you don't see Tunnel type or Authentication type in the portal on this page, your gateway is using the Basic SKU.
The Basic SKU does not support IKEv2 or RADIUS authentication.

7. Configure tunnel type


You can select the tunnel type. The two tunnel options are SSTP and IKEv2. The strongSwan client on Android and
Linux and the native IKEv2 VPN client on iOS and macOS use only the IKEv2 tunnel to connect. Windows clients try
IKEv2 first and, if that doesn't connect, fall back to SSTP. You can enable one tunnel type or both. Select
the checkboxes that your solution requires.

8. Configure authentication type


Select Azure certificate.
9. Upload the root certificate public certificate data
You can upload additional trusted root certificates up to a total of 20. Once the public certificate data is uploaded,
Azure can use it to authenticate clients that have installed a client certificate generated from the trusted root
certificate. Upload the public key information for the root certificate to Azure.
1. Certificates are added on the Point-to-site configuration page in the Root certificate section.
2. Make sure that you exported the root certificate as a Base-64 encoded X.509 (.cer) file. You need to export the
certificate in this format so that you can open it with a text editor.
3. Open the certificate with a text editor, such as Notepad. When copying the certificate data, make sure that
you copy the text as one continuous line without carriage returns or line feeds. You may need to modify your
view in the text editor to 'Show Symbol/Show all characters' to see the carriage returns and line feeds. Copy
only the Base-64 text between the BEGIN CERTIFICATE and END CERTIFICATE lines as one continuous line:

4. Paste the certificate data into the Public Certificate Data field. Name the certificate, and then click Save.
You can add up to 20 trusted root certificates.

5. Click Save at the top of the page to save all of the configuration settings.
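The one-continuous-line copy described in step 3 can also be produced programmatically instead of by hand-editing in Notepad. A minimal sketch (in Python; the sample certificate body is a made-up placeholder, not real certificate data):

```python
# Sketch: reduce a Base-64 encoded X.509 (.cer) export to the single continuous
# line expected by the Public Certificate Data field. Sample data is fabricated.
def public_cert_data(pem_text: str) -> str:
    # Keep only the Base-64 body, dropping the BEGIN/END CERTIFICATE markers
    # and all carriage returns / line feeds.
    body = [line.strip() for line in pem_text.splitlines()
            if line.strip() and "CERTIFICATE" not in line]
    return "".join(body)

sample = """-----BEGIN CERTIFICATE-----
MIIBszCCARygAwIBAgIQ
q2hB2m5fQUV7l3Zl8T1x
-----END CERTIFICATE-----"""
print(public_cert_data(sample))
```

The resulting string can be pasted directly into the Public Certificate Data field.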
10. Install an exported client certificate
If you want to create a P2S connection from a client computer other than the one you used to generate the client
certificates, you need to install a client certificate. When installing a client certificate, you need the password that
was created when the client certificate was exported.
Make sure the client certificate was exported as a .pfx along with the entire certificate chain (which is the default).
Otherwise, the root certificate information isn't present on the client computer and the client won't be able to
authenticate properly.
For install steps, see Install a client certificate.

11. Generate and install the VPN client configuration package


The VPN client configuration files contain settings to configure devices to connect to a VNet over a P2S
connection. For instructions to generate and install VPN client configuration files, see Create and install VPN client
configuration files for native Azure certificate authentication P2S configurations.

12. Connect to Azure


To connect from a Windows VPN client

NOTE
You must have Administrator rights on the Windows client computer from which you are connecting.

1. To connect to your VNet, on the client computer, navigate to VPN connections and locate the VPN
connection that you created. It has the same name as your virtual network. Click Connect. A pop-up
message may appear that refers to using the certificate. Click Continue to use elevated privileges.
2. On the Connection status page, click Connect to start the connection. If you see a Select Certificate
screen, verify that the client certificate showing is the one that you want to use to connect. If it is not, use the
drop-down arrow to select the correct certificate, and then click OK.
3. Your connection is established.

Troubleshoot Windows P2S connections


If you are having trouble connecting, check the following items:
If you exported a client certificate, make sure that you exported it as a .pfx file using the default value
'Include all certificates in the certification path if possible'. When you export it using this value, the root
certificate information is also exported. When the certificate is installed on the client computer, the root
certificate which is contained in the .pfx file is then also installed on the client computer. The client computer
must have the root certificate information installed. To check, go to Manage user certificates and navigate
to Trusted Root Certification Authorities\Certificates. Verify that the root certificate is listed. The root
certificate must be present in order for authentication to work.
If you are using a certificate that was issued using an Enterprise CA solution and are having trouble
authenticating, check the authentication order on the client certificate. You can check the authentication list
order by double-clicking the client certificate, and going to Details > Enhanced Key Usage. Make sure the
list shows 'Client Authentication' as the first item. If not, you need to issue a client certificate based on the
User template that has Client Authentication as the first item in the list.
For additional P2S troubleshooting information, see Troubleshoot P2S connections.
To connect from a Mac VPN client
From the Network dialog box, locate the client profile that you want to use, specify the settings from the
VpnSettings.xml, and then click Connect.
To verify your connection
These instructions apply to Windows clients.
1. To verify that your VPN connection is active, open an elevated command prompt, and run ipconfig /all.
2. View the results. Notice that the IP address you received is one of the addresses within the Point-to-Site
VPN Client Address Pool that you specified in your configuration. The results are similar to this example:

PPP adapter VNet1:


Connection-specific DNS Suffix .:
Description.....................: VNet1
Physical Address................:
DHCP Enabled....................: No
Autoconfiguration Enabled.......: Yes
IPv4 Address....................: 172.16.201.3(Preferred)
Subnet Mask.....................: 255.255.255.255
Default Gateway.................:
NetBIOS over Tcpip..............: Enabled

To connect to a virtual machine


These instructions apply to Windows clients.
You can connect to a VM that is deployed to your VNet by creating a Remote Desktop Connection to your VM. The
best way to initially verify that you can connect to your VM is to connect by using its private IP address, rather than
computer name. That way, you are testing to see if you can connect, not whether name resolution is configured
properly.
1. Locate the private IP address. You can find the private IP address of a VM by either looking at the properties
for the VM in the Azure portal, or by using PowerShell.
Azure portal - Locate your virtual machine in the Azure portal. View the properties for the VM. The
private IP address is listed.
PowerShell - Use the example to view a list of VMs and private IP addresses from your resource
groups. You don't need to modify this example before using it.
# List each VM with its private IP address and allocation method
$VMs = Get-AzureRmVM
# Keep only the NICs that are attached to a VM
$Nics = Get-AzureRmNetworkInterface | Where-Object VirtualMachine -ne $null

foreach($Nic in $Nics)
{
    # Match the NIC back to its parent VM by resource Id
    $VM = $VMs | Where-Object -Property Id -eq $Nic.VirtualMachine.Id
    # Read the private IP address and its allocation method (Static or Dynamic)
    $Prv = $Nic.IpConfigurations | Select-Object -ExpandProperty PrivateIpAddress
    $Alloc = $Nic.IpConfigurations | Select-Object -ExpandProperty PrivateIpAllocationMethod
    Write-Output "$($VM.Name): $Prv,$Alloc"
}

2. Verify that you are connected to your VNet using the Point-to-Site VPN connection.
3. Open Remote Desktop Connection by typing "RDP" or "Remote Desktop Connection" in the search box on
the taskbar, then select Remote Desktop Connection. You can also open Remote Desktop Connection using the
'mstsc' command in PowerShell.
4. In Remote Desktop Connection, enter the private IP address of the VM. You can click "Show Options" to adjust
additional settings, then connect.
To troubleshoot an RDP connection to a VM
If you are having trouble connecting to a virtual machine over your VPN connection, check the following:
Verify that your VPN connection is successful.
Verify that you are connecting to the private IP address for the VM.
Use 'ipconfig' to check the IPv4 address assigned to the Ethernet adapter on the computer from which you are
connecting. If the IP address is within the address range of the VNet that you are connecting to, or within the
address range of your VPNClientAddressPool, this is referred to as an overlapping address space. When your
address space overlaps in this way, the network traffic doesn't reach Azure; it stays on the local network.
If you can connect to the VM using the private IP address, but not the computer name, verify that you have
configured DNS properly. For more information about how name resolution works for VMs, see Name
Resolution for VMs.
Verify that the VPN client configuration package was generated after the DNS server IP addresses were
specified for the VNet. If you updated the DNS server IP addresses, generate and install a new VPN client
configuration package.
For more information about RDP connections, see Troubleshoot Remote Desktop connections to a VM.
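The overlapping-address-space check described above can be expressed as a simple membership test. A sketch (in Python; the local address and ranges are hypothetical values standing in for your ipconfig output and your configuration):

```python
# Sketch: test whether the local adapter's IPv4 address (from ipconfig) falls
# inside the VNet range or the VPN client address pool. Values are hypothetical.
import ipaddress

local_ip = ipaddress.ip_address("10.1.0.25")            # from ipconfig
vnet = ipaddress.ip_network("10.1.0.0/16")              # VNet address space
client_pool = ipaddress.ip_network("172.16.201.0/24")   # VPNClientAddressPool

for name, network in (("VNet", vnet), ("client pool", client_pool)):
    if local_ip in network:
        print(f"{local_ip} overlaps the {name}; traffic stays on the local network")
    else:
        print(f"{local_ip} does not overlap the {name}")
```

If the local address falls inside either range, traffic destined for the VNet never leaves the local network.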

To add or remove trusted root certificates


You can add and remove trusted root certificates from Azure. When you remove a root certificate, clients that have
a certificate generated from that root won't be able to authenticate, and thus will not be able to connect. If you want
a client to authenticate and connect, you need to install a new client certificate generated from a root certificate that
is trusted (uploaded) to Azure.
To add a trusted root certificate
You can add up to 20 trusted root certificate .cer files to Azure. For instructions, see the section Upload the root
certificate public certificate data in this article.
To remove a trusted root certificate
1. To remove a trusted root certificate, navigate to the Point-to-site configuration page for your virtual network
gateway.
2. In the Root certificate section of the page, locate the certificate that you want to remove.
3. Click the ellipsis next to the certificate, and then click 'Remove'.

To revoke a client certificate


You can revoke client certificates. The certificate revocation list allows you to selectively deny Point-to-Site
connectivity based on individual client certificates. This is different than removing a trusted root certificate. If you
remove a trusted root certificate .cer from Azure, it revokes the access for all client certificates generated/signed by
the revoked root certificate. Revoking a client certificate, rather than the root certificate, allows the other certificates
that were generated from the root certificate to continue to be used for authentication.
The common practice is to use the root certificate to manage access at the team or organization level, while
revoking individual client certificates for fine-grained access control over individual users.
Revoke a client certificate
You can revoke a client certificate by adding the thumbprint to the revocation list.
1. Retrieve the client certificate thumbprint. For more information, see How to retrieve the Thumbprint of a
Certificate.
2. Copy the information to a text editor and remove all spaces so that it is a continuous string.
3. Navigate to the virtual network gateway Point-to-site configuration page. This is the same page that you
used to upload a trusted root certificate.
4. In the Revoked certificates section, input a friendly name for the certificate (it doesn't have to be the certificate
CN).
5. Copy and paste the thumbprint string to the Thumbprint field.
6. The thumbprint validates and is automatically added to the revocation list. A message appears on the screen
that the list is updating.
7. After updating has completed, the certificate can no longer be used to connect. Clients that try to connect using
this certificate receive a message saying that the certificate is no longer valid.
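Step 2 above, removing the spaces so the thumbprint is a continuous string, can be done programmatically. A minimal sketch (in Python; the thumbprint below is a sample value, not one of your certificates):

```python
# Sketch: normalize a thumbprint copied from the certificate details dialog into
# the continuous string the Thumbprint field expects. Sample value only.
def normalize_thumbprint(raw: str) -> str:
    # Strip spaces and the invisible byte-order mark that some certificate
    # dialogs prepend when copying, then uppercase for consistency.
    return raw.replace("\ufeff", "").replace(" ", "").upper()

raw = "a9 09 50 2d d8 2a e4 14 33 e6 f8 38 86 b0 0d 42 77 a3 2a 7b"
print(normalize_thumbprint(raw))
```

Paste the normalized string into the Thumbprint field in the Revoked certificates section.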

Point-to-Site FAQ
What client operating systems can I use with Point-to-Site?
The following client operating systems are supported:
Windows 7 (32-bit and 64-bit)
Windows Server 2008 R2 (64-bit only)
Windows 8.1 (32-bit and 64-bit)
Windows Server 2012 (64-bit only)
Windows Server 2012 R2 (64-bit only)
Windows Server 2016 (64-bit only)
Windows 10
Mac OS X version 10.11 (El Capitan)
Mac OS X version 10.12 (Sierra)
Linux (StrongSwan)
iOS
NOTE
Starting July 1, 2018, support is being removed for TLS 1.0 and 1.1 from Azure VPN Gateway. VPN Gateway will support only
TLS 1.2. To maintain TLS support and connectivity for your Windows 7 and Windows 8 point-to-site clients that use TLS, we
recommend that you install the following updates:
• Update for Microsoft EAP implementation that enables the use of TLS
• Update to enable TLS 1.1 and TLS 1.2 as default secure protocols in WinHTTP
The following legacy algorithms will also be deprecated for TLS on July 1, 2018:
RC4 (Rivest Cipher 4)
DES (Data Encryption Standard)
3DES (Triple Data Encryption Algorithm)
MD5 (Message Digest 5)
SHA-1 (Secure Hash Algorithm 1)

How many VPN client endpoints can I have in my Point-to-Site configuration?
Up to 128 VPN clients can connect to a virtual network at the same time.
Can I traverse proxies and firewalls using Point-to-Site capability?
Azure supports two types of Point-to-Site VPN options:
Secure Socket Tunneling Protocol (SSTP). SSTP is a Microsoft proprietary SSL-based solution that can
penetrate firewalls, since most firewalls open outbound TCP port 443, which SSL uses.
IKEv2 VPN. IKEv2 VPN is a standards-based IPsec VPN solution that uses UDP ports 500 and 4500 and IP
protocol number 50. Firewalls do not always open these ports, so there is a possibility that IKEv2 VPN is not
able to traverse proxies and firewalls.
If I restart a client computer configured for Point-to-Site, will the VPN automatically reconnect?
By default, the client computer will not reestablish the VPN connection automatically.
Does Point-to-Site support auto-reconnect and DDNS on the VPN clients?
Auto-reconnect and DDNS are currently not supported in Point-to-Site VPNs.
Can I have Site-to-Site and Point-to-Site configurations coexist for the same virtual network?
Yes. For the Resource Manager deployment model, you must have a RouteBased VPN type for your gateway. For
the classic deployment model, you need a dynamic gateway. We do not support Point-to-Site for static routing
VPN gateways or PolicyBased VPN gateways.
Can I configure a Point-to-Site client to connect to multiple virtual networks at the same time?
No. A Point-to-Site client can only connect to resources in the VNet in which the virtual network gateway resides.
How much throughput can I expect through Site-to-Site or Point-to-Site connections?
It's difficult to maintain the exact throughput of the VPN tunnels. IPsec and SSTP are crypto-heavy VPN protocols.
Throughput is also limited by the latency and bandwidth between your premises and the Internet. For a VPN
Gateway with only IKEv2 Point-to-Site VPN connections, the total throughput that you can expect depends on the
Gateway SKU. For more information on throughput, see Gateway SKUs.
Can I use any software VPN client for Point-to-Site that supports SSTP and/or IKEv2?
No. You can only use the native VPN client on Windows for SSTP, and the native VPN client on Mac for IKEv2.
Refer to the list of supported client operating systems.
Does Azure support IKEv2 VPN with Windows?
IKEv2 is supported on Windows 10 and Server 2016. However, in order to use IKEv2, you must install updates and
set a registry key value locally. OS versions prior to Windows 10 are not supported and can only use SSTP.
To prepare Windows 10 or Server 2016 for IKEv2:
1. Install the update.

OS VERSION                                     DATE               NUMBER/LINK
Windows Server 2016, Windows 10 Version 1607   January 17, 2018   KB4057142
Windows 10 Version 1703                        January 17, 2018   KB4057144

2. Set the registry key value. Create or set the
"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RasMan\IKEv2\DisableCertReqPayload"
REG_DWORD key in the registry to 1.
What happens when I configure both SSTP and IKEv2 for P2S VPN connections?
When you configure both SSTP and IKEv2 in a mixed environment (consisting of Windows and Mac devices), the
Windows VPN client always tries the IKEv2 tunnel first, but falls back to SSTP if the IKEv2 connection is not
successful. macOS connects only via IKEv2.
Other than Windows and Mac, which other platforms does Azure support for P2S VPN?
Azure supports only Windows and Mac for P2S VPN.
I already have an Azure VPN Gateway deployed. Can I enable RADIUS and/or IKEv2 VPN on it?
Yes, you can enable these new features on already deployed gateways using PowerShell or the Azure portal,
provided that the gateway SKU that you are using supports RADIUS and/or IKEv2. For example, the VPN gateway
Basic SKU does not support RADIUS or IKEv2.
Can I use my own internal PKI root CA for Point-to-Site connectivity?
Yes. Previously, only self-signed root certificates could be used. You can still upload up to 20 root certificates.
What tools can I use to create certificates?
You can use your Enterprise PKI solution (your internal PKI), Azure PowerShell, MakeCert, and OpenSSL.
Are there instructions for certificate settings and parameters?
Internal PKI/Enterprise PKI solution: See the steps to Generate certificates.
Azure PowerShell: See the Azure PowerShell article for steps.
MakeCert: See the MakeCert article for steps.
OpenSSL:
When exporting certificates, be sure to convert the root certificate to Base64.
For the client certificate:
When creating the private key, specify the length as 4096.
When creating the certificate, for the -extensions parameter, specify usr_cert.

Next steps
Once your connection is complete, you can add virtual machines to your virtual networks. For more information,
see Virtual Machines. To understand more about networking and virtual machines, see Azure and Linux VM
network overview.
For P2S troubleshooting information, see Troubleshooting Azure point-to-site connections.
Create and modify an ExpressRoute circuit
2/16/2018 • 5 minutes to read • Edit Online

This article describes how to create an Azure ExpressRoute circuit by using the Azure portal and the Azure
Resource Manager deployment model. The following steps also show you how to check the status of the circuit,
update it, or delete and deprovision it.

Before you begin


Review the prerequisites and workflows before you begin configuration.
Ensure that you have access to the Azure portal.
Ensure that you have permissions to create new networking resources. Contact your account administrator if
you do not have the right permissions.
You can view a video before beginning in order to better understand the steps.

Create and provision an ExpressRoute circuit


1. Sign in to the Azure portal
From a browser, navigate to the Azure portal and sign in with your Azure account.
2. Create a new ExpressRoute circuit

IMPORTANT
Your ExpressRoute circuit is billed from the moment a service key is issued. Ensure that you perform this operation when the
connectivity provider is ready to provision the circuit.

1. You can create an ExpressRoute circuit by selecting the option to create a new resource. Click Create a
resource > Networking > ExpressRoute, as shown in the following image:
2. After you click ExpressRoute, you'll see the Create ExpressRoute circuit page. When you're filling in the
values on this page, make sure that you specify the correct SKU tier (Standard or Premium) and data
metering billing model (Unlimited or Metered).
Tier determines whether an ExpressRoute standard or an ExpressRoute premium add-on is enabled. You
can specify Standard to get the standard SKU or Premium for the premium add-on.
Data metering determines the billing type. You can specify Metered for a metered data plan and
Unlimited for an unlimited data plan. Note that you can change the billing type from Metered to
Unlimited, but you can't change the type from Unlimited to Metered.
Peering Location is the physical location where you are peering with Microsoft.

IMPORTANT
The Peering Location indicates the physical location where you are peering with Microsoft. This is not linked
to the "Location" property, which refers to the geography where the Azure Network Resource Provider is located.
While they are not related, it is a good practice to choose a Network Resource Provider geographically close
to the Peering Location of the circuit.

3. View the circuits and properties


View all the circuits
You can view all the circuits that you created by selecting All resources on the left-side menu.
View the properties
You can view the properties of the circuit by selecting it. On the Overview page for your circuit, the service key
appears in the Service key field. Copy the service key for your circuit and pass it to your connectivity
provider to complete the provisioning process. The service key is specific to your circuit.
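If you prefer the CLI, the service key can be read directly from the circuit resource. This sketch assumes a circuit named MyCircuit in a resource group named ExRouteRG (hypothetical names) and an authenticated subscription:

```shell
# Retrieve the circuit's service key to hand to the connectivity provider.
az network express-route show \
  --resource-group ExRouteRG \
  --name MyCircuit \
  --query serviceKey \
  --output tsv
```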

4. Send the service key to your connectivity provider for provisioning


On this page, Provider status provides information on the current state of provisioning on the service-provider
side. Circuit status provides the state on the Microsoft side. For more information about circuit provisioning
states, see the Workflows article.
When you create a new ExpressRoute circuit, the circuit is in the following state:
Provider status: Not provisioned
Circuit status: Enabled

The circuit changes to the following state when the connectivity provider is in the process of enabling it for you:
Provider status: Provisioning
Circuit status: Enabled
For you to be able to use an ExpressRoute circuit, it must be in the following state:
Provider status: Provisioned
Circuit status: Enabled
5. Periodically check the status and the state of the circuit key
You can view the properties of the circuit that you're interested in by selecting it. Check the Provider status and
ensure that it has moved to Provisioned before you continue.
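The same check can be scripted rather than done in the portal. A sketch, assuming the hypothetical circuit and resource-group names MyCircuit and ExRouteRG and an authenticated subscription:

```shell
# Poll the provider-side and Microsoft-side states; continue once the
# provider state reads "Provisioned" and the circuit state reads "Enabled".
az network express-route show \
  --resource-group ExRouteRG \
  --name MyCircuit \
  --query "{provider:serviceProviderProvisioningState, circuit:circuitProvisioningState}" \
  --output table
```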

6. Create your routing configuration


For step-by-step instructions, refer to the ExpressRoute circuit routing configuration article to create and modify
circuit peerings.

IMPORTANT
These instructions only apply to circuits that are created with service providers that offer layer 2 connectivity services. If
you're using a service provider that offers managed layer 3 services (typically an IP VPN, like MPLS), your connectivity
provider configures and manages routing for you.

7. Link a virtual network to an ExpressRoute circuit


Next, link a virtual network to your ExpressRoute circuit. Use the Linking virtual networks to ExpressRoute circuits
article when you work with the Resource Manager deployment model.

Getting the status of an ExpressRoute circuit


You can view the status of a circuit by selecting it and viewing the Overview page.

Modifying an ExpressRoute circuit


You can modify certain properties of an ExpressRoute circuit without impacting connectivity. On the
Configuration page, you can modify the bandwidth, SKU, and billing model, and allow classic operations. For
information on limits and limitations, see the ExpressRoute FAQ.
You can perform the following tasks with no downtime:
Enable or disable an ExpressRoute Premium add-on for your ExpressRoute circuit.
Increase the bandwidth of your ExpressRoute circuit, provided there is capacity available on the port.
Downgrading the bandwidth of a circuit is not supported.
Change the metering plan from Metered Data to Unlimited Data. Changing the metering plan from Unlimited
Data to Metered Data is not supported.
You can enable and disable Allow Classic Operations.
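The no-downtime changes listed above map to a single CLI update call. A sketch, using the hypothetical names MyCircuit and ExRouteRG and assuming an authenticated subscription and available port capacity:

```shell
# Increase bandwidth and enable the Premium add-on without downtime.
# Downgrading bandwidth or moving from Unlimited to Metered is not supported.
az network express-route update \
  --resource-group ExRouteRG \
  --name MyCircuit \
  --bandwidth 500 \
  --sku-tier Premium
```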

IMPORTANT
You may have to recreate the ExpressRoute circuit if there is inadequate capacity on the existing port. You cannot upgrade
the circuit if there is no additional capacity available at that location.
Although you can seamlessly upgrade the bandwidth, you cannot reduce the bandwidth of an ExpressRoute circuit without
disruption. Downgrading bandwidth requires you to deprovision the ExpressRoute circuit and then reprovision a new
ExpressRoute circuit.
Disabling the Premium add-on operation can fail if you're using resources that are greater than what is permitted for the
standard circuit.

To modify an ExpressRoute circuit, click Configuration.


Deprovisioning and deleting an ExpressRoute circuit
You can delete your ExpressRoute circuit by selecting the delete icon. Note the following information:
You must unlink all virtual networks from the ExpressRoute circuit. If this operation fails, check whether any
virtual networks are linked to the circuit.
If the ExpressRoute circuit service provider provisioning state is Provisioning or Provisioned you must work
with your service provider to deprovision the circuit on their side. We continue to reserve resources and bill
you until the service provider completes deprovisioning the circuit and notifies us.
If the service provider has deprovisioned the circuit (the service provider provisioning state is set to Not
provisioned), you can delete the circuit. This stops billing for the circuit.
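Once the service provider state shows Not provisioned, deletion is a single CLI call. A sketch with the hypothetical names used for illustration, assuming all virtual networks have already been unlinked:

```shell
# Delete the circuit; this stops billing for it.
az network express-route delete \
  --resource-group ExRouteRG \
  --name MyCircuit
```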

Next steps
After you create your circuit, continue with the following next steps:
Create and modify routing for your ExpressRoute circuit
Link your virtual network to your ExpressRoute circuit
Network monitoring solutions
6/7/2018 • 3 minutes to read • Edit Online

Azure offers a host of solutions to monitor your networking assets, with tools to monitor network
connectivity and the health of ExpressRoute circuits, and to analyze network traffic in the cloud.

Network Performance Monitor (NPM)


Network Performance Monitor (NPM) is a suite of capabilities geared toward monitoring the health of your
network and your network connectivity to your applications, and providing insights into the performance of
your network. NPM is cloud-based and provides a hybrid network monitoring solution that monitors connectivity
between:
Cloud deployments and on-premises locations
Multiple data centers and branch offices
Mission critical multi-tier applications/micro-services
User locations and web-based applications (HTTP/HTTPs)
Performance Monitor, ExpressRoute Monitor, and Service Endpoint Monitor are monitoring capabilities within
NPM and are described below.

Performance Monitor
Performance Monitor is part of NPM and is network monitoring for cloud, hybrid, and on-premises environments.
You can monitor network connectivity across remote branch and field offices, store locations, data centers, and
clouds. You can detect network issues before your users complain. The key advantages are:
Monitor loss and latency across various subnets and set alerts
Monitor all paths (including redundant paths) on the network
Troubleshoot transient and point-in-time network issues that are difficult to replicate
Determine the specific segment on the network that is responsible for degraded performance
Monitor the health of the network without the need for SNMP

For more information, view the following articles:


Configure a Network Performance Monitor Solution in Log Analytics
Use cases
Product Updates: February 2017, August 2017
ExpressRoute Monitor
NPM for ExpressRoute offers comprehensive ExpressRoute monitoring for Azure Private peering and Microsoft
peering connections. You can monitor E2E connectivity and performance between your branch offices and Azure
over ExpressRoute. The key capabilities are:
Auto-detection of ER circuits associated with your subscription
Detection of network topology from on-premises to your cloud applications
Capacity planning, bandwidth utilization analysis
Monitoring and alerting on both primary and secondary paths
Monitoring connectivity to Azure services such as Office 365 and Dynamics 365 over ExpressRoute
Detect degradation of connectivity to VNets

For more information, see the following articles:


Configure Network Performance Monitor for ExpressRoute
Blog post

Service Endpoint Monitor


With Service Endpoint monitoring, you can test the reachability of applications and detect performance
bottlenecks across on-premises networks, carrier networks, and cloud/private data centers.
Monitor end-to-end network connectivity to applications
Correlate application delivery with network performance, detect precise location of degradation along the path
between the user and the application
Test application reachability from multiple user locations across the globe
Determine network latency and packet loss for your line of business and SaaS applications
Determine hot spots on the network that may be causing poor application performance
Monitor reachability to Office 365 applications, using built-in tests for Microsoft Office 365, Dynamics 365,
Skype for Business and other Microsoft services
For more information, see the following articles:
Configure Network Performance Monitor for monitoring Service Endpoints
Blog post

Traffic Analytics
Traffic Analytics is a cloud-based solution that provides visibility into user and application activity on your cloud
networks. NSG Flow logs are analyzed to provide insights into:
Traffic flows across your networks between Azure and Internet, public cloud regions, VNETs, and subnets
Applications and protocols on your network, without the need for sniffers or dedicated flow collector appliances
Top talkers, chatty applications, VM conversations in the cloud, traffic hotspots
Sources and destinations of traffic across VNETs, inter-relationships between critical business services and
applications
Security: malicious traffic, ports open to the Internet, and applications or VMs attempting Internet access
Capacity utilization - helps you eliminate issues of over-provisioning or underutilization by monitoring
utilization trends of VPN gateways and other services
Traffic Analytics equips you with actionable information that helps you audit your organization’s network activity,
secure applications and data, optimize workload performance and stay compliant.

Related links:
Blog post, Documentation, FAQ

DNS Analytics
Built for DNS Administrators, this solution collects, analyzes, and correlates DNS logs to provide security,
operations, and performance-related insights. Some of the capabilities are:
Identification of clients that try to resolve malicious domains
Identification of stale resource records
Visibility into frequently queried domain names and talkative DNS clients
Visibility into the request load on DNS servers
Monitoring of dynamic DNS registration failures

Related links:
Blog post, Documentation

Miscellaneous
New Pricing
Check resource usage against limits
6/6/2018 • 3 minutes to read • Edit Online

In this article, you learn how to see the number of each network resource type that you've deployed in your
subscription and what your subscription limits are. Viewing resource usage against limits is helpful for
tracking current usage and planning future use. You can use the Azure portal, PowerShell, or the Azure CLI to
track usage.

Azure Portal
1. Log into the Azure portal.
2. At the top, left corner of the Azure portal, select All services.
3. Enter Subscriptions in the Filter box. When Subscriptions appears in the search results, select it.
4. Select the name of the subscription you want to view usage information for.
5. Under SETTINGS, select Usage + quota.
6. You can select the following options:
Resource types: You can select all resource types, or select the specific types of resources you want to
view.
Providers: You can select all resource providers, or select Compute, Network, or Storage.
Locations: You can select all Azure locations, or select specific locations.
You can choose to show all resources, or only the resources where at least one is deployed.
The example in the following picture shows all of the network resources with at least one resource
deployed in the East US:

You can sort the columns by selecting the column heading. The limits shown are the limits for your
subscription. If you need to increase a default limit, select Request Increase, then complete and
submit the support request. All resources have a maximum limit listed in Azure limits. If your current
limit is already at the maximum number, the limit can't be increased.

PowerShell
You can run the commands that follow in the Azure Cloud Shell, or by running PowerShell from your computer.
The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with
your account. If you run PowerShell from your computer, you need the AzureRM PowerShell module, version 6.0.1
or later. Run Get-Module -ListAvailable AzureRM on your computer, to find the installed version. If you need to
upgrade, see Install Azure PowerShell module. If you're running PowerShell locally, you also need to run
Login-AzureRmAccount to log in to Azure.
View your usage against limits with Get-AzureRmNetworkUsage. The following example gets the usage for
resources where at least one resource is deployed in the East US location:

Get-AzureRmNetworkUsage `
-Location eastus `
| Where-Object {$_.CurrentValue -gt 0} `
| Format-Table ResourceType, CurrentValue, Limit

You receive output formatted the same as the following example output:

ResourceType CurrentValue Limit
------------ ------------ -----
Virtual Networks 1 50
Network Security Groups 2 100
Public IP Addresses 1 60
Network Interfaces 1 24000
Network Watchers 1 1

Azure CLI
If you use Azure Command-line interface (CLI) commands to complete tasks in this article, either run the
commands in the Azure Cloud Shell, or run the CLI from your computer. This article requires the Azure CLI version
2.0.32 or later. Run az --version to find the installed version. If you need to install or upgrade, see Install Azure
CLI 2.0. If you're running the Azure CLI locally, you also need to run az login to log in to Azure.
View your usage against limits with az network list-usages. The following example gets the usage for resources in
the East US location:

az network list-usages \
--location eastus \
--out table

You receive output formatted the same as the following example output:

Name CurrentValue Limit
------------ ------------ -----
Virtual Networks 1 50
Network Security Groups 2 100
Public IP Addresses 1 60
Network Interfaces 1 24000
Network Watchers 1 1
Load Balancers 0 100
Application Gateways 0 50
Azure CLI Samples for networking
6/27/2017 • 2 minutes to read • Edit Online

The following table includes links to bash scripts built using the Azure CLI.

Connectivity between Azure resources

Create a virtual network for multi-tier applications Creates a virtual network with front-end and back-end
subnets. Traffic to the front-end subnet is limited to HTTP and
SSH, while traffic to the back-end subnet is limited to MySQL,
port 3306.

Peer two virtual networks Creates and connects two virtual networks in the same region.

Route traffic through a network virtual appliance Creates a virtual network with front-end and back-end
subnets and a VM that is able to route traffic between the two
subnets.

Filter inbound and outbound VM network traffic Creates a virtual network with front-end and back-end
subnets. Inbound network traffic to the front-end subnet is
limited to HTTP, HTTPS and SSH. Outbound traffic to the
Internet from the back-end subnet is not permitted.

Load balancing and traffic direction

Load balance traffic to VMs for high availability Creates several virtual machines in a highly available and load
balanced configuration.

Load balance multiple websites on VMs Creates two VMs with multiple IP configurations, joined to an
Azure Availability Set, accessible through an Azure Load
Balancer.

Direct traffic across multiple regions for high application availability Creates two app service plans, two
web apps, a traffic manager profile, and two traffic manager endpoints.
Azure PowerShell Samples for networking
6/27/2017 • 2 minutes to read • Edit Online

The following table includes links to scripts built using Azure PowerShell.

Connectivity between Azure resources

Create a virtual network for multi-tier applications Creates a virtual network with front-end and back-end
subnets. Traffic to the front-end subnet is limited to HTTP,
while traffic to the back-end subnet is limited to SQL, port
1433.

Peer two virtual networks Creates and connects two virtual networks in the same region.

Route traffic through a network virtual appliance Creates a virtual network with front-end and back-end
subnets and a VM that is able to route traffic between the two
subnets.

Filter inbound and outbound VM network traffic Creates a virtual network with front-end and back-end
subnets. Inbound network traffic to the front-end subnet is
limited to HTTP and HTTPS. Outbound traffic to the Internet
from the back-end subnet is not permitted.

Load balancing and traffic direction

Load balance traffic to VMs for high availability Creates several virtual machines in a highly available and load
balanced configuration.

Load balance multiple websites on VMs Creates two VMs with multiple IP configurations, joined to an
Azure Availability Set, accessible through an Azure Load
Balancer.

Direct traffic across multiple regions for high application availability Creates two app service plans, two
web apps, a traffic manager profile, and two traffic manager endpoints.
Configure a Point-to-Site connection to a VNet using
native Azure certificate authentication: Azure portal
3/21/2018 • 27 minutes to read • Edit Online

This article helps you securely connect individual clients running Windows or Mac OS X to an Azure VNet.
Point-to-Site VPN connections are useful when you want to connect to your VNet from a remote location, such as
when you are telecommuting from home or a conference. You can also use P2S instead of a Site-to-Site VPN when
you have only a few clients that need to connect to a VNet. Point-to-Site connections do not require a VPN device
or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling
Protocol) or IKEv2. For more information about Point-to-Site VPN, see About Point-to-Site VPN.

Architecture
Point-to-Site native Azure certificate authentication connections use the following items, which you configure in
this exercise:
A RouteBased VPN gateway.
The public key (.cer file) for a root certificate, which is uploaded to Azure. Once the certificate is uploaded, it is
considered a trusted certificate and is used for authentication.
A client certificate that is generated from the root certificate. The client certificate is installed on each client
computer that will connect to the VNet. This certificate is used for client authentication.
A VPN client configuration. The VPN client configuration files contain the necessary information for the client
to connect to the VNet. The files configure the existing VPN client that is native to the operating system. Each
client that connects must be configured using the settings in the configuration files.
Example values
You can use the following values to create a test environment, or refer to these values to better understand the
examples in this article:
VNet Name: VNet1
Address space: 192.168.0.0/16
For this example, we use only one address space. You can have more than one address space for your VNet.
Subnet name: FrontEnd
Subnet address range: 192.168.1.0/24
Subscription: If you have more than one subscription, verify that you are using the correct one.
Resource Group: TestRG
Location: East US
GatewaySubnet: 192.168.200.0/24
DNS Server: (optional) IP address of the DNS server that you want to use for name resolution.
Virtual network gateway name: VNet1GW
Gateway type: VPN
VPN type: Route-based
Public IP address name: VNet1GWpip
Connection type: Point-to-site
Client address pool: 172.16.201.0/24
VPN clients that connect to the VNet using this Point-to-Site connection receive an IP address from the client
address pool.

1. Create a virtual network


Before beginning, verify that you have an Azure subscription. If you don't already have an Azure subscription, you
can activate your MSDN subscriber benefits or sign up for a free account.
To create a VNet in the Resource Manager deployment model by using the Azure portal, follow the steps below.
The screenshots are provided as examples. Be sure to replace the values with your own. For more information
about working with virtual networks, see the Virtual Network Overview.

NOTE
If you want this VNet to connect to an on-premises location (in addition to creating a P2S configuration), you need to
coordinate with your on-premises network administrator to carve out an IP address range that you can use specifically for
this virtual network. If a duplicate address range exists on both sides of the VPN connection, traffic does not route the way
you may expect it to. Additionally, if you want to connect this VNet to another VNet, the address space cannot
overlap with the other VNet's address space. Take care to plan your network configuration accordingly.

1. From a browser, navigate to the Azure portal and, if necessary, sign in with your Azure account.
2. Click +. In the Search the marketplace field, type "Virtual Network". Locate Virtual Network from the
returned list and click to open the Virtual Network page.

3. Near the bottom of the Virtual Network page, from the Select a deployment model list, select Resource
Manager, and then click Create.
4. On the Create virtual network page, configure the VNet settings. When you fill in the fields, the red
exclamation mark becomes a green check mark when the characters entered in the field are valid. There
may be values that are auto-filled. If so, replace the values with your own. The Create virtual network
page looks similar to the following example:

5. Name: Enter the name for your Virtual Network.


6. Address space: Enter the address space. If you have multiple address spaces to add, add your first address
space. You can add additional address spaces later, after creating the VNet.
7. Subscription: Verify that the Subscription listed is the correct one. You can change subscriptions by using the
drop-down.
8. Resource group: Select an existing resource group, or create a new one by typing a name for your new
resource group. If you are creating a new group, name the resource group according to your planned
configuration values. For more information about resource groups, visit Azure Resource Manager Overview.
9. Location: Select the location for your VNet. The location determines where the resources that you deploy to
this VNet will reside.
10. Subnet: Add the subnet name and subnet address range. You can add additional subnets later, after creating
the VNet.
11. Select Pin to dashboard if you want to be able to find your VNet easily on the dashboard, and then click
Create.

12. After clicking Create, you will see a tile on your dashboard that will reflect the progress of your VNet. The
tile changes as the VNet is being created.
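The portal steps above can also be sketched with the Azure CLI, using the example values from this article (TestRG, VNet1, and the FrontEnd subnet). This assumes an authenticated subscription:

```shell
# Create a resource group and the VNet with its FrontEnd subnet,
# using the article's example address spaces.
az group create --name TestRG --location eastus

az network vnet create \
  --resource-group TestRG \
  --name VNet1 \
  --address-prefix 192.168.0.0/16 \
  --subnet-name FrontEnd \
  --subnet-prefix 192.168.1.0/24
```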
2. Add a gateway subnet
Before connecting your virtual network to a gateway, you first need to create the gateway subnet for the virtual
network to which you want to connect. The gateway services use the IP addresses specified in the gateway subnet.
If possible, create a gateway subnet using a CIDR block of /28 or /27 to provide enough IP addresses to
accommodate additional future configuration requirements.
1. In the portal, navigate to the Resource Manager virtual network for which you want to create a virtual network
gateway.
2. In the Settings section of your VNet page, click Subnets to expand the Subnets page.
3. On the Subnets page, click +Gateway subnet to open the Add subnet page.

4. The Name for your subnet is automatically filled in with the value 'GatewaySubnet'. This value is required
in order for Azure to recognize the subnet as the gateway subnet. Adjust the auto-filled Address range
values to match your configuration requirements, then click OK at the bottom of the page to create the
subnet.
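Equivalently, the gateway subnet can be added with the CLI, assuming the article's example names and an authenticated subscription. The subnet must be named exactly GatewaySubnet for Azure to recognize it:

```shell
# Add the gateway subnet; the name must be exactly "GatewaySubnet".
az network vnet subnet create \
  --resource-group TestRG \
  --vnet-name VNet1 \
  --name GatewaySubnet \
  --address-prefix 192.168.200.0/24
```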

3. Specify a DNS server (optional)


After you create your virtual network, you can add the IP address of a DNS server to handle name resolution. The
DNS server is optional for this configuration, but required if you want name resolution. Specifying a value does
not create a new DNS server. The DNS server IP address that you specify should be a DNS server that can
resolve the names for the resources you are connecting to. For this example, we used a private IP address, but it is
likely that this is not the IP address of your DNS server. Be sure to use your own values. The value you specify is
used by the resources that you deploy to the VNet, not by the P2S connection or the VPN client.
1. On the Settings page for your virtual network, navigate to DNS Servers and click to open the DNS
servers page.
DNS Servers: Select Custom.
Add DNS server: Enter the IP address of the DNS server that you want to use for name resolution.
2. When you are done adding DNS servers, click Save at the top of the page.

4. Create a virtual network gateway


1. In the portal, on the left side, click + Create a resource and type 'Virtual Network Gateway' in search. Locate
Virtual network gateway in the search return and click the entry. On the Virtual network gateway page,
click Create at the bottom of the page to open the Create virtual network gateway page.
2. On the Create virtual network gateway page, fill in the values for your virtual network gateway.

3. On the Create virtual network gateway page, specify the values for your virtual network gateway.
Name: Name your gateway. This is not the same as naming a gateway subnet. It's the name of the
gateway object you are creating.
Gateway type: Select VPN. VPN gateways use the virtual network gateway type VPN.
VPN type: Select the VPN type that is specified for your configuration. Most configurations require a
Route-based VPN type.
SKU: Select the gateway SKU from the dropdown. The SKUs listed in the dropdown depend on the
VPN type you select. For more information about gateway SKUs, see Gateway SKUs.
Location: You may need to scroll to see Location. Adjust the Location field to point to the location
where your virtual network is located. If the location is not pointing to the region where your virtual
network resides, when you select a virtual network in the next step, it will not appear in the drop-down
list.
Virtual network: Choose the virtual network to which you want to add this gateway. Click Virtual
network to open the 'Choose a virtual network' page. Select the VNet. If you don't see your VNet, make
sure the Location field is pointing to the region in which your virtual network is located.
Gateway subnet address range: You will only see this setting if you did not previously create a
gateway subnet for your virtual network. If you previously created a valid gateway subnet, this setting
will not appear.
First IP configuration: The 'Choose public IP address' page creates a public IP address object that
gets associated to the VPN gateway. The public IP address is dynamically assigned to this object
when the VPN gateway is created. VPN Gateway currently only supports Dynamic Public IP address
allocation. However, this does not mean that the IP address changes after it has been assigned to
your VPN gateway. The only time the Public IP address changes is when the gateway is deleted and
re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of
your VPN gateway.
First, click Create gateway IP configuration to open the 'Choose public IP address' page, then
click +Create new to open the 'Create public IP address' page.
Next, input a Name for your public IP address. Leave the SKU as Basic unless there is a
specific reason to change it to something else, then click OK at the bottom of this page to save
your changes.

4. Verify the settings. You can select Pin to dashboard at the bottom of the page if you want your gateway to
appear on the dashboard.
5. Click Create to begin creating the VPN gateway. The settings are validated and you'll see the "Deploying
Virtual network gateway" tile on the dashboard. Creating a gateway can take up to 45 minutes. You may need
to refresh your portal page to see the completed status.
After the gateway is created, view the IP address that has been assigned to it by looking at the virtual network in
the portal. The gateway appears as a connected device. You can click the connected device (your virtual network
gateway) to view more information.
NOTE
The Basic SKU does not support IKEv2 or RADIUS authentication.
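The gateway creation steps above can be sketched with the Azure CLI, using the article's example names (VNet1GW, VNet1GWpip, TestRG). The VpnGw1 SKU shown here is one possible non-Basic choice; pick the SKU that fits your configuration. This assumes an authenticated subscription:

```shell
# Create the dynamically allocated public IP for the gateway,
# then the route-based VPN gateway (creation can take up to 45 minutes).
az network public-ip create \
  --resource-group TestRG \
  --name VNet1GWpip \
  --allocation-method Dynamic

az network vnet-gateway create \
  --resource-group TestRG \
  --name VNet1GW \
  --vnet VNet1 \
  --public-ip-address VNet1GWpip \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --no-wait
```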

5. Generate certificates
Certificates are used by Azure to authenticate clients connecting to a VNet over a Point-to-Site VPN connection.
Once you obtain a root certificate, you upload the public key information to Azure. The root certificate is then
considered 'trusted' by Azure for connection over P2S to the virtual network. You also generate client certificates
from the trusted root certificate, and then install them on each client computer. The client certificate is used to
authenticate the client when it initiates a connection to the VNet.
1. Obtain the .cer file for the root certificate
You can use either a root certificate that was generated using an enterprise solution (recommended), or you can
generate a self-signed certificate. After creating the root certificate, export the public certificate data (not the
private key) as a Base-64 encoded X.509 .cer file and upload the public certificate data to Azure.
Enterprise certificate: If you are using an enterprise solution, you can use your existing certificate chain.
Obtain the .cer file for the root certificate that you want to use.
Self-signed root certificate: If you aren't using an enterprise certificate solution, you need to create a self-
signed root certificate. It's important that you follow the steps in one of the P2S certificate articles below.
Otherwise, the certificates you create won't be compatible with P2S connections and clients receive a
connection error when trying to connect. You can use Azure PowerShell, MakeCert, or OpenSSL. The steps
in the provided articles generate a compatible certificate:
Windows 10 PowerShell instructions: These instructions require Windows 10 and PowerShell to
generate certificates. Client certificates that are generated from the root certificate can be installed on
any supported P2S client.
MakeCert instructions: Use MakeCert if you don't have access to a Windows 10 computer to use to
generate certificates. MakeCert is deprecated, but you can still use it to generate certificates.
Client certificates that are generated from the root certificate can be installed on any supported P2S
client.
2. Generate a client certificate
Each client computer that connects to a VNet using Point-to-Site must have a client certificate installed. The client
certificate is generated from the root certificate and installed on each client computer. If a valid client certificate is
not installed and the client tries to connect to the VNet, authentication fails.
You can either generate a unique certificate for each client, or you can use the same certificate for multiple clients.
The advantage to generating unique client certificates is the ability to revoke a single certificate. Otherwise, if
multiple clients are using the same client certificate and you need to revoke it, you have to generate and install
new certificates for all the clients that use that certificate to authenticate.
You can generate client certificates using the following methods:
Enterprise certificate:
If you are using an enterprise certificate solution, generate a client certificate with the common name
value format 'name@yourdomain.com', rather than the 'domain name\username' format.
Make sure the client certificate is based on the 'User' certificate template that has 'Client Authentication'
as the first item in the use list, rather than Smart Card Logon, etc. You can check the certificate by
double-clicking the client certificate and viewing Details > Enhanced Key Usage.
Self-signed root certificate: It's important that you follow the steps in one of the P2S certificate articles
below. Otherwise, the client certificates you create won't be compatible with P2S connections and clients
receive an error when trying to connect. The steps in either of the following articles generate a compatible
client certificate:
Windows 10 PowerShell instructions: These instructions require Windows 10 and PowerShell to
generate certificates. The certificates that are generated can be installed on any supported P2S client.
MakeCert instructions: Use MakeCert if you don't have access to a Windows 10 computer to use to
generate certificates. MakeCert is deprecated, but you can still use it to generate certificates. The
certificates that are generated can be installed on any supported P2S client.
When you generate a client certificate from a self-signed root certificate using the preceding instructions,
it's automatically installed on the computer that you used to generate it. If you want to install a client
certificate on another client computer, you need to export it as a .pfx, along with the entire certificate chain.
This creates a .pfx file that contains the root certificate information that is required for the client to
successfully authenticate. For steps to export a certificate, see Certificates - export a client certificate.

6. Add the client address pool


The client address pool is a range of private IP addresses that you specify. The clients that connect over a Point-to-
Site VPN dynamically receive an IP address from this range. Use a private IP address range that does not overlap
with the on-premises location that you connect from, or the VNet that you want to connect to.
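As a quick sanity check before saving the pool, the no-overlap requirement can be verified with Python's standard ipaddress module. This is a hedged sketch; the three prefixes below are placeholder values, not taken from any particular deployment:

```python
import ipaddress

# Placeholder prefixes -- substitute the ranges from your own environment.
vnet_prefix = ipaddress.ip_network("10.1.0.0/16")        # the VNet you connect to
onprem_prefix = ipaddress.ip_network("192.168.0.0/24")   # your on-premises network
client_pool = ipaddress.ip_network("172.16.201.0/24")    # proposed P2S address pool

# The client address pool must not overlap either of the other two ranges.
for name, prefix in [("VNet", vnet_prefix), ("on-premises", onprem_prefix)]:
    status = "OVERLAPS" if client_pool.overlaps(prefix) else "ok"
    print(f"{name} {prefix}: {status}")
```

If either line prints OVERLAPS, pick a different private range for the pool before saving the Point-to-Site configuration.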
1. Once the virtual network gateway has been created, navigate to the Settings section of the virtual network
gateway page. In the Settings section, click Point-to-site configuration.

2. Click Configure now to open the configuration page.

3. On the Point-to-site configuration page, in the Address pool box, add the private IP address range that
you want to use. VPN clients dynamically receive an IP address from the range that you specify. Click Save
to validate and save the setting.
NOTE
If you don't see Tunnel type or Authentication type in the portal on this page, your gateway is using the Basic SKU.
The Basic SKU does not support IKEv2 or RADIUS authentication.

7. Configure tunnel type


You can select the tunnel type. The two tunnel options are SSTP and IKEv2. The strongSwan client on Android and
Linux and the native IKEv2 VPN client on iOS and Mac OS X use only the IKEv2 tunnel to connect. Windows clients
try IKEv2 first, and if that doesn't connect, they fall back to SSTP. You can enable one tunnel type or both. Select
the checkboxes that your solution requires.

8. Configure authentication type


Select Azure certificate.
9. Upload the root certificate public certificate data
You can upload additional trusted root certificates up to a total of 20. Once the public certificate data is uploaded,
Azure can use it to authenticate clients that have installed a client certificate generated from the trusted root
certificate. Upload the public key information for the root certificate to Azure.
1. Certificates are added on the Point-to-site configuration page in the Root certificate section.
2. Make sure that you exported the root certificate as a Base-64 encoded X.509 (.cer) file. You need to export the
certificate in this format so you can open it with a text editor.
3. Open the certificate with a text editor, such as Notepad. When copying the certificate data, make sure that
you copy the text as one continuous line without carriage returns or line feeds. You may need to modify
your view in the text editor to 'Show Symbol/Show all characters' to see the carriage returns and line feeds.
Copy only the Base64 section of the certificate (the text between the '-----BEGIN CERTIFICATE-----' and
'-----END CERTIFICATE-----' lines) as one continuous line.

4. Paste the certificate data into the Public Certificate Data field. Name the certificate, and then click Save.
You can add up to 20 trusted root certificates.

5. Click Save at the top of the page to save all of the configuration settings.
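The 'one continuous line' requirement from step 3 can also be met programmatically instead of editing in Notepad. The following Python sketch strips the BEGIN/END markers and all line breaks from a Base-64 .cer file; the sample certificate body is a made-up fragment, not real certificate data:

```python
def cert_to_one_line(pem_text: str) -> str:
    """Return the Base64 body of a Base-64 encoded .cer file as one
    continuous line, with the BEGIN/END markers and line breaks removed."""
    lines = (ln.strip() for ln in pem_text.splitlines())
    return "".join(ln for ln in lines if ln and not ln.startswith("-----"))

# Made-up fragment standing in for real certificate data:
sample = """-----BEGIN CERTIFICATE-----
MIIBszCCARyg
AwIBAgIQdzQq
-----END CERTIFICATE-----"""

print(cert_to_one_line(sample))  # MIIBszCCARygAwIBAgIQdzQq
```

The resulting string is what you paste into the Public Certificate Data field.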
10. Install an exported client certificate
If you want to create a P2S connection from a client computer other than the one you used to generate the client
certificates, you need to install a client certificate. When installing a client certificate, you need the password that
was created when the client certificate was exported.
Make sure the client certificate was exported as a .pfx along with the entire certificate chain (which is the default).
Otherwise, the root certificate information isn't present on the client computer and the client won't be able to
authenticate properly.
For install steps, see Install a client certificate.

11. Generate and install the VPN client configuration package


The VPN client configuration files contain settings to configure devices to connect to a VNet over a P2S
connection. For instructions to generate and install VPN client configuration files, see Create and install VPN
client configuration files for native Azure certificate authentication P2S configurations.

12. Connect to Azure


To connect from a Windows VPN client

NOTE
You must have Administrator rights on the Windows client computer from which you are connecting.

1. To connect to your VNet, on the client computer, navigate to VPN connections and locate the VPN
connection that you created. It has the same name as your virtual network. Click Connect. A pop-up
message may appear that refers to using the certificate. Click Continue to use elevated privileges.
2. On the Connection status page, click Connect to start the connection. If you see a Select Certificate
screen, verify that the client certificate showing is the one that you want to use to connect. If it is not, use
the drop-down arrow to select the correct certificate, and then click OK.
3. Your connection is established.

Troubleshoot Windows P2S connections


If you are having trouble connecting, check the following items:
If you exported a client certificate, make sure that you exported it as a .pfx file using the default value
'Include all certificates in the certification path if possible'. When you export it using this value, the root
certificate information is also exported. When the certificate is installed on the client computer, the root
certificate which is contained in the .pfx file is then also installed on the client computer. The client computer
must have the root certificate information installed. To check, go to Manage user certificates and
navigate to Trusted Root Certification Authorities\Certificates. Verify that the root certificate is listed.
The root certificate must be present in order for authentication to work.
If you are using a certificate that was issued using an Enterprise CA solution and are having trouble
authenticating, check the authentication order on the client certificate. You can check the authentication list
order by double-clicking the client certificate, and going to Details > Enhanced Key Usage. Make sure
the list shows 'Client Authentication' as the first item. If not, you need to issue a client certificate based on
the User template that has Client Authentication as the first item in the list.
For additional P2S troubleshooting information, see Troubleshoot P2S connections.
To connect from a Mac VPN client
From the Network dialog box, locate the client profile that you want to use, specify the settings from the
VpnSettings.xml, and then click Connect.
To verify your connection
These instructions apply to Windows clients.
1. To verify that your VPN connection is active, open an elevated command prompt, and run 'ipconfig /all'.
2. View the results. Notice that the IP address you received is one of the addresses within the Point-to-Site
VPN Client Address Pool that you specified in your configuration. The results are similar to this example:

PPP adapter VNet1:


Connection-specific DNS Suffix .:
Description.....................: VNet1
Physical Address................:
DHCP Enabled....................: No
Autoconfiguration Enabled.......: Yes
IPv4 Address....................: 172.16.201.3(Preferred)
Subnet Mask.....................: 255.255.255.255
Default Gateway.................:
NetBIOS over Tcpip..............: Enabled
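Instead of eyeballing the output, the membership check can be scripted with the standard ipaddress module. A minimal sketch, assuming the example address above and a pool of 172.16.201.0/24 (the pool prefix is an assumption; use the range you configured):

```python
import ipaddress

def ip_in_pool(assigned_ip: str, pool: str) -> bool:
    """Return True when the PPP adapter's IPv4 address falls inside the
    configured Point-to-Site client address pool."""
    return ipaddress.ip_address(assigned_ip) in ipaddress.ip_network(pool)

# Address from the example output above; the /24 pool is an assumed value.
print(ip_in_pool("172.16.201.3", "172.16.201.0/24"))  # True
```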

To connect to a virtual machine


These instructions apply to Windows clients.
You can connect to a VM that is deployed to your VNet by creating a Remote Desktop Connection to your VM.
The best way to initially verify that you can connect to your VM is to connect by using its private IP address, rather
than computer name. That way, you are testing to see if you can connect, not whether name resolution is
configured properly.
1. Locate the private IP address. You can find the private IP address of a VM by either looking at the
properties for the VM in the Azure portal, or by using PowerShell.
Azure portal - Locate your virtual machine in the Azure portal. View the properties for the VM. The
private IP address is listed.
PowerShell - Use the example to view a list of VMs and private IP addresses from your resource
groups. You don't need to modify this example before using it.
# List each VM with the private IP address and allocation method of its NIC.
$VMs = Get-AzureRmVM
$Nics = Get-AzureRmNetworkInterface | Where VirtualMachine -ne $null

foreach($Nic in $Nics)
{
# Match the NIC to its VM, then read the private IP and allocation method.
$VM = $VMs | Where-Object -Property Id -eq $Nic.VirtualMachine.Id
$Prv = $Nic.IpConfigurations | Select-Object -ExpandProperty PrivateIpAddress
$Alloc = $Nic.IpConfigurations | Select-Object -ExpandProperty PrivateIpAllocationMethod
Write-Output "$($VM.Name): $Prv,$Alloc"
}

2. Verify that you are connected to your VNet using the Point-to-Site VPN connection.
3. Open Remote Desktop Connection by typing "RDP" or "Remote Desktop Connection" in the search box on
the taskbar, then select Remote Desktop Connection. You can also open Remote Desktop Connection using the
'mstsc' command in PowerShell.
4. In Remote Desktop Connection, enter the private IP address of the VM. You can click "Show Options" to adjust
additional settings, then connect.
To troubleshoot an RDP connection to a VM
If you are having trouble connecting to a virtual machine over your VPN connection, check the following:
Verify that your VPN connection is successful.
Verify that you are connecting to the private IP address for the VM.
Use 'ipconfig' to check the IPv4 address assigned to the Ethernet adapter on the computer from which you are
connecting. If the IP address is within the address range of the VNet that you are connecting to, or within the
address range of your VPNClientAddressPool, this is referred to as an overlapping address space. When your
address space overlaps in this way, the network traffic doesn't reach Azure; it stays on the local network.
If you can connect to the VM using the private IP address, but not the computer name, verify that you have
configured DNS properly. For more information about how name resolution works for VMs, see Name
Resolution for VMs.
Verify that the VPN client configuration package was generated after the DNS server IP addresses were
specified for the VNet. If you updated the DNS server IP addresses, generate and install a new VPN client
configuration package.
For more information about RDP connections, see Troubleshoot Remote Desktop connections to a VM.
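To automate the 'private IP vs. computer name' comparison above, a small resolver check can help isolate DNS problems from connectivity problems. A sketch using only the standard library; the VM name and address in the comment are hypothetical:

```python
import socket

def resolves_to(host: str, expected_ip: str) -> bool:
    """Return True if `host` resolves to `expected_ip`; False when it
    resolves to a different address or does not resolve at all."""
    try:
        return socket.gethostbyname(host) == expected_ip
    except socket.gaierror:
        return False

# Hypothetical usage against a VM: resolves_to("myvm", "172.16.1.4")
print(resolves_to("localhost", "127.0.0.1"))
```

If the function returns False for a VM that you can reach by private IP, DNS for the VNet is the likely problem.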

To add or remove trusted root certificates


You can add and remove trusted root certificates from Azure. When you remove a root certificate, clients that have
a certificate generated from that root won't be able to authenticate, and thus will not be able to connect. If you
want a client to authenticate and connect, you need to install a new client certificate generated from a root
certificate that is trusted (uploaded) to Azure.
To add a trusted root certificate
You can add up to 20 trusted root certificate .cer files to Azure. For instructions, see the section Upload the root
certificate public certificate data in this article.
To remove a trusted root certificate
1. To remove a trusted root certificate, navigate to the Point-to-site configuration page for your virtual network
gateway.
2. In the Root certificate section of the page, locate the certificate that you want to remove.
3. Click the ellipsis next to the certificate, and then click 'Remove'.

To revoke a client certificate


You can revoke client certificates. The certificate revocation list allows you to selectively deny Point-to-Site
connectivity based on individual client certificates. This is different than removing a trusted root certificate. If you
remove a trusted root certificate .cer from Azure, it revokes the access for all client certificates generated/signed
by the revoked root certificate. Revoking a client certificate, rather than the root certificate, allows the other
certificates that were generated from the root certificate to continue to be used for authentication.
The common practice is to use the root certificate to manage access at team or organization levels, while using
revoked client certificates for fine-grained access control on individual users.
Revoke a client certificate
You can revoke a client certificate by adding the thumbprint to the revocation list.
1. Retrieve the client certificate thumbprint. For more information, see How to retrieve the Thumbprint of a
Certificate.
2. Copy the information to a text editor and remove all spaces so that it is a continuous string.
3. Navigate to the virtual network gateway Point-to-site-configuration page. This is the same page that you
used to upload a trusted root certificate.
4. In the Revoked certificates section, input a friendly name for the certificate (it doesn't have to be the
certificate CN).
5. Copy and paste the thumbprint string to the Thumbprint field.
6. The thumbprint validates and is automatically added to the revocation list. A message appears on the screen
that the list is updating.
7. After updating has completed, the certificate can no longer be used to connect. Clients that try to connect using
this certificate receive a message saying that the certificate is no longer valid.
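Step 2's cleanup (removing spaces and any invisible characters picked up when copying from the certificate dialog) can be sketched in Python; the thumbprint below is a made-up value, not a real certificate's:

```python
def normalize_thumbprint(raw: str) -> str:
    """Keep only hexadecimal digits, uppercased, so the thumbprint is one
    continuous string with no spaces or hidden characters."""
    return "".join(ch for ch in raw if ch in "0123456789abcdefABCDEF").upper()

# Made-up thumbprint copied with spaces, as the certificate dialog shows it:
print(normalize_thumbprint("a9 09 50 2d d8 2a e4 14 33 e6 f8 38 86 b0 0d 42 77 a3 2a 7b"))
# A909502DD82AE41433E6F83886B00D4277A32A7B
```

The normalized string is what you paste into the Thumbprint field.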

Point-to-Site FAQ
What client operating systems can I use with Point-to-Site?
The following client operating systems are supported:
Windows 7 (32-bit and 64-bit)
Windows Server 2008 R2 (64-bit only)
Windows 8.1 (32-bit and 64-bit)
Windows Server 2012 (64-bit only)
Windows Server 2012 R2 (64-bit only)
Windows Server 2016 (64-bit only)
Windows 10
Mac OS X version 10.11 (El Capitan)
Mac OS X version 10.12 (Sierra)
Linux (StrongSwan)
iOS
NOTE
Starting July 1, 2018, support is being removed for TLS 1.0 and 1.1 from Azure VPN Gateway. VPN Gateway will support
only TLS 1.2. To maintain TLS support and connectivity for your Windows 7 and Windows 8 point-to-site clients that use
TLS, we recommend that you install the following updates:
• Update for Microsoft EAP implementation that enables the use of TLS
• Update to enable TLS 1.1 and TLS 1.2 as default secure protocols in WinHTTP
The following legacy algorithms will also be deprecated for TLS on July 1, 2018:
RC4 (Rivest Cipher 4)
DES (Data Encryption Standard)
3DES (Triple Data Encryption Algorithm)
MD5 (Message Digest 5)
SHA-1 (Secure Hash Algorithm 1)

How many VPN client endpoints can I have in my Point-to-Site configuration?
Up to 128 VPN clients can connect to a virtual network at the same time.
Can I traverse proxies and firewalls using Point-to-Site capability?
Azure supports two types of Point-to-site VPN options:
Secure Socket Tunneling Protocol (SSTP). SSTP is a Microsoft proprietary SSL-based solution that can
penetrate firewalls, since most firewalls open TCP port 443, which SSL uses.
IKEv2 VPN. IKEv2 VPN is a standards-based IPsec VPN solution that uses UDP ports 500 and 4500 and IP
protocol no. 50. Firewalls do not always open these ports, so there is a possibility of IKEv2 VPN not being
able to traverse proxies and firewalls.
If I restart a client computer configured for Point-to-Site, will the VPN automatically reconnect?
By default, the client computer will not reestablish the VPN connection automatically.
Does Point-to-Site support auto-reconnect and DDNS on the VPN clients?
Auto-reconnect and DDNS are currently not supported in Point-to-Site VPNs.
Can I have Site-to-Site and Point-to-Site configurations coexist for the same virtual network?
Yes. For the Resource Manager deployment model, you must have a RouteBased VPN type for your gateway. For
the classic deployment model, you need a dynamic gateway. We do not support Point-to-Site for static routing
VPN gateways or PolicyBased VPN gateways.
Can I configure a Point-to-Site client to connect to multiple virtual networks at the same time?
No. A Point-to-Site client can only connect to resources in the VNet in which the virtual network gateway resides.
How much throughput can I expect through Site-to-Site or Point-to-Site connections?
It's difficult to maintain the exact throughput of the VPN tunnels. IPsec and SSTP are crypto-heavy VPN
protocols. Throughput is also limited by the latency and bandwidth between your premises and the Internet. For a
VPN Gateway with only IKEv2 Point-to-Site VPN connections, the total throughput that you can expect depends
on the Gateway SKU. For more information on throughput, see Gateway SKUs.
Can I use any software VPN client for Point-to-Site that supports SSTP and/or IKEv2?
No. You can only use the native VPN client on Windows for SSTP, and the native VPN client on Mac for IKEv2.
Refer to the list of supported client operating systems.
Does Azure support IKEv2 VPN with Windows?
IKEv2 is supported on Windows 10 and Server 2016. However, in order to use IKEv2, you must install updates
and set a registry key value locally. OS versions prior to Windows 10 are not supported and can only use SSTP.
To prepare Windows 10 or Server 2016 for IKEv2:
1. Install the update.

OS VERSION                  DATE                 NUMBER/LINK

Windows Server 2016         January 17, 2018     KB4057142

Windows 10 Version 1607

Windows 10 Version 1703     January 17, 2018     KB4057144

2. Set the registry key value. Create or set the
"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RasMan\IKEv2\DisableCertReqPayload"
REG_DWORD key in the registry to 1.
What happens when I configure both SSTP and IKEv2 for P2S VPN connections?
When you configure both SSTP and IKEv2 in a mixed environment (consisting of Windows and Mac devices), the
Windows VPN client always tries the IKEv2 tunnel first, but falls back to SSTP if the IKEv2 connection is not
successful. Mac OS X connects only via IKEv2.
Other than Windows and Mac, which other platforms does Azure support for P2S VPN?
Azure supports only Windows and Mac for P2S VPN.
I already have an Azure VPN Gateway deployed. Can I enable RADIUS and/or IKEv2 VPN on it?
Yes, you can enable these new features on already deployed gateways using PowerShell or the Azure portal,
provided that the gateway SKU that you are using supports RADIUS and/or IKEv2. For example, the VPN
gateway Basic SKU does not support RADIUS or IKEv2.
Can I use my own internal PKI root CA for Point-to-Site connectivity?
Yes. Previously, only self-signed root certificates could be used. You can still upload up to 20 root certificates.
What tools can I use to create certificates?
You can use your Enterprise PKI solution (your internal PKI), Azure PowerShell, MakeCert, and OpenSSL.
Are there instructions for certificate settings and parameters?
Internal PKI/Enterprise PKI solution: See the steps to Generate certificates.
Azure PowerShell: See the Azure PowerShell article for steps.
MakeCert: See the MakeCert article for steps.
OpenSSL:
When exporting certificates, be sure to convert the root certificate to Base64.
For the client certificate:
When creating the private key, specify the length as 4096.
When creating the certificate, for the -extensions parameter, specify usr_cert.

Next steps
Once your connection is complete, you can add virtual machines to your virtual networks. For more information,
see Virtual Machines. To understand more about networking and virtual machines, see Azure and Linux VM
network overview.
For P2S troubleshooting information, see Troubleshoot Azure point-to-site connections.
