Executive Summary
This document provides the physical design and configuration for the Windows Server 2012 with Hyper-V and the System Center 2012 Virtual Machine Manager (SCVMM) technology streams of the Public Protector South Africa (PPSA) platform upgrade project. The design and configuration of these two (2) components will provide a standard for extending the virtualization capacity based on future requirements as the business grows. The PPSA has already purchased a pre-designed and configured Dell vStart 200, which will be deployed and configured as the first scale-unit in their datacenter. The virtualization capabilities will be made available through the deployment of Windows Server 2012 with Hyper-V on the standardized scale-unit in the PPSA. The management layer will be built using System Center 2012 SP1, and this document includes the design components for System Center 2012 Virtual Machine Manager.
[Figure 1: Host hardware layout — 128GB RAM, two Intel processors, 1GB NIC expansion cards, four iSCSI NICs, and an iLO/DRAC management port]
The primary operating system (OS) installed on the host will be Windows Server 2012 Datacenter Edition with the following roles and features enabled.
2.1.1
Required Roles
Hyper-V with the applicable management tools that are automatically selected.
2.1.2
Required Features
Failover Clustering with the applicable tools that are automatically selected
Multipath I/O
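As a minimal sketch, the roles and features above can be enabled per host with the standard Windows Server 2012 Server Manager cmdlets; run on each of the six hosts:

```powershell
# The Hyper-V role requires a restart; the management tools match the
# roles and features listed above.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
Install-WindowsFeature -Name Multipath-IO
```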
2.1.3
BIOS Configuration
The BIOS of the individual hosts needs to be upgraded to the latest release version and the following options need to be enabled:
Processor Settings: Virtualization Technology must be Enabled
Processor Settings: Execute Disable must be Enabled
2.1.4
BMC Configuration
The baseboard management controller (BMC) needs to be configured to allow for out-of-band management of the hosts and to allow System Center 2012 Virtual Machine Manager (SCVMM) to discover the physical computer. This will be used for bare-metal provisioning of the hosts and management from SCVMM. The BMC must support any one of the following out-of-band management protocols:
Intelligent Platform Management Interface (IPMI) version 1.5 or 2.0
Data Center Management Interface (DCMI) version 1.0
System Management Architecture for Server Hardware (SMASH) version 1.0 over WS-Management (WS-Man)
DRAC Configuration
The following table provides the detailed DRAC configuration:
All seven (7) host DRAC controllers are enabled for the IPMI protocol on VLAN 1 (Model, Host Name, IP, Subnet, and Gateway are assigned per host).
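As a hedged illustration of the SCVMM discovery described above, the VMM cmdlet below discovers a physical computer through its DRAC; the Run As account name and the BMC address are assumptions for this sketch:

```powershell
# Run from the SCVMM command shell. "DRAC-Admin" is an assumed Run As account
# holding the root credentials from Table 2; the BMC address is illustrative.
$bmcAccount = Get-SCRunAsAccount -Name "DRAC-Admin"
Find-SCComputer -BMCAddress "10.131.133.50" -BMCRunAsAccount $bmcAccount -BMCProtocol "IPMI"
```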
The following details have been configured to gain access to the DRAC controller for the individual hosts:
Username: root
Password: The password will be given in a secure document.
Table 2: DRAC Credentials
2.1.5
Host Network Configuration
The following table provides the detailed Host and Hyper-V cluster network configuration once the LBFO team is established:
Model     Host Name    Host Type             IP             Subnet         Gateway        VLAN
R620      OHOWVSMAN    Management            10.131.133.39  255.255.255.0  10.131.133.1   1
R720      OHOWVSHV01   Virtualization Host   10.131.133.41  255.255.255.0  10.131.133.1   1
R720      OHOWVSHV02   Virtualization Host   10.131.133.42  255.255.255.0  10.131.133.1   1
R720      OHOWVSHV03   Virtualization Host   10.131.133.43  255.255.255.0  10.131.133.1   1
R720      OHOWVSHV04   Virtualization Host   10.131.133.44  255.255.255.0  10.131.133.1   1
R720      OHOWVSHV05   Virtualization Host   10.131.133.45  255.255.255.0  10.131.133.1   1
R720      OHOWVSHV06   Virtualization Host   10.131.133.46  255.255.255.0  10.131.133.1   1
Hyper-V   OHOWVSCV01   Hyper-V Cluster Name  10.131.133.40  255.255.255.0  10.131.133.1   1
Table 3: Host and Hyper-V Cluster Network Configuration
2.1.6
Private Network Configuration
The following table provides the detailed private network configuration for the Cluster and Live Migration Networks that will be created as virtual interfaces once the LBFO team is established. The private network interfaces will be disabled from registering in DNS.
Hosts        Cluster Network (Cluster, VLAN 6)   Live Migration Network (Live Migrate, VLAN 7)
OHOWVSHV01
OHOWVSHV02
OHOWVSHV03
OHOWVSHV04
OHOWVSHV05
OHOWVSHV06
Table 4: Private Network Configuration
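A minimal sketch for disabling DNS registration on the private interfaces once the virtual adapters exist (the vEthernet aliases assume the adapter names Cluster and LM used in section 2.2.1):

```powershell
# Prevent the Cluster and Live Migration vNICs from registering in DNS.
foreach ($alias in "vEthernet (Cluster)", "vEthernet (LM)") {
    Set-DnsClient -InterfaceAlias $alias -RegisterThisConnection $false
}
```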
2.1.7
Security Design Decisions
The following security design principles need to be taken into consideration when designing a virtualization solution built using Hyper-V. The section below provides the details on the decisions taken for the PPSA, based on their skills and requirements.
Security Consideration: Reduce the attack footprint of the Windows Server operating system by installing Windows Server Core.
Design Decision: The PPSA does not have the required knowledge of PowerShell to manage Windows Server Core.

Security Consideration: Create and apply Hyper-V specific group policies to disable any unnecessary ports and/or features.
Design Decision: The recommended Windows Server 2012 Hyper-V group policy will be extracted from the Microsoft Security Compliance Manager and applied to all the Hyper-V hosts. The group policy will be imported into Active Directory and applied on an organizational unit where all the Hyper-V hosts reside.

Security Consideration: Limit the Hyper-V operators to only manage the virtualization layer and not the operating system itself by adding the required users to the Hyper-V Administrators group on the local Hyper-V server.
Design Decision: The following group will be created in Active Directory: GG-HyperV-Admins. The group will be added to the Hyper-V group policy discussed earlier to add it to the local Hyper-V Administrators group on each of the Hyper-V hosts. This group will contain only the required Hyper-V administrators in the PPSA.

Security Consideration: Install antivirus on the Hyper-V servers and add exclusions for the locations where the hypervisor stores the virtual machine profiles and virtual hard drives.
Design Decision: System Center 2012 Endpoint Protection (SCEP) will be deployed and managed by System Center 2012 Configuration Manager. When SCEP is installed on a Hyper-V host it will automatically configure the exclusions for the virtual machine data locations as it inherits them from the Hyper-V host.

Security Consideration: Encrypt the volumes using BitLocker where the virtual machine data is stored. This is required for virtualization hosts where physical security is a constraint.
Design Decision: The Cluster Shared Volumes (CSV) where the virtual machine data will reside will not be encrypted.

Table 5: Hyper-V Security Design Decisions
The creation and deployment of the required group policies and organizational units need to go through the standard change process to ensure they are in a managed state and created in the correct location.
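The GG-HyperV-Admins group from Table 5 can be created as sketched below; the OU distinguished name is an assumption and must match the OU approved through the change process:

```powershell
Import-Module ActiveDirectory
# Assumed OU path and domain (ppsa.local); substitute the approved values.
New-ADGroup -Name "GG-HyperV-Admins" -GroupScope Global -GroupCategory Security `
    -Path "OU=Hyper-V Hosts,DC=ppsa,DC=local"
```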
2.2 Network
The following section provides the detailed design for the network configuration of the hosts and the switches used in the solution. The design and configuration are aimed at simplifying the management of the networks while providing load-balanced connectivity for the management OS and virtual machines.
2.2.1
Network Team Configuration
The six (6) available network cards for dataflow per host will be used to create a single network team that is switch independent and configured to use the Hyper-V switch port traffic distribution algorithm. This allows the offload of Virtual Machine Queues (VMQs) directly to the NIC and distributes inbound and outbound traffic evenly across the team members, because there will be more VMs than available network adapters on the hosts. Figure 2 provides a logical view of how the network team will be configured and which virtual interfaces the management OS requires to enable the hosts to be configured as a failover cluster. The network team will also be used for the virtual machine traffic, and a virtual network switch will be created to allow communication from the virtual environment to the production environment.
[Figure 2: Network team configuration — the management OS virtual interfaces (Management, Cluster, Live Migration) and the virtual machines (VM 1 … VM n) connect through the Hyper-V switch to the network team]
The following classic network architecture will be required for implementing a failover cluster:
[Figure: Failover cluster architecture — cluster nodes (Node 1 through Node n) attached to shared storage holding the VHD file(s) and pass-through disk(s)]
The Host Management Network: This network is used for managing the Hyper-V host. This type of configuration is recommended because it allows Hyper-V services to be managed regardless of the network workload generated by hosted virtual machines.
The Cluster Heartbeat Network: This network is used by the Failover Cluster service to check that each node of the cluster is available and functioning correctly. This network can also be used by a cluster node to access its storage through another node if direct connectivity to the SAN is lost (Dynamic I/O Redirection). It is recommended to use dedicated network equipment for the Cluster Heartbeat Network to get the best availability of the failover cluster service.
The Live Migration Network: This network is used for live migration of virtual machines between two nodes. The live migration process is particularly useful for planned maintenance operations on a host because it allows virtual machines to be moved between two cluster nodes with little or no loss of network connectivity. The network bandwidth directly influences the time needed to live migrate a virtual machine. For this reason it is recommended to use the fastest possible connectivity and, as with the Cluster Heartbeat Network, to use dedicated network equipment.
Virtual Machine Networks: These networks are used for virtual machine connectivity. Most of the time, virtual machines require multiple networks. This can be addressed by using several network cards dedicated to virtual machine workloads or by implementing VLAN tagging and VLAN isolation on a high-speed network. Most of the time, the host parent partition is not connected to these networks. This approach avoids consuming unnecessary TCP/IP addresses and reinforces the isolation of the parent partition from hosted virtual machines.
The following steps need to be followed to create the LBFO team with the required virtual adapters per Hyper-V host (a scripted sketch follows the table below):
1. Create the switch independent Hyper-V Port LBFO team called Team01 using the Windows Server 2012 NIC Teaming software.
2. Create the Hyper-V switch called vSwitch using Hyper-V Manager and do not allow the management OS to create an additional virtual adapter.
3. Create the virtual adapters for the management OS as illustrated in Figure 2 and assign the required VLANs.
4. Configure the network interfaces with the information described in Table 3.
A minimum network bandwidth will be assigned to each network interface and managed by the QoS Packet Scheduler in Windows. The traffic will be separated by VLANs, which allows for optimal usage of the available network connections. The following table provides the network configuration per host:
Network Interface   Name         Minimum Bandwidth   IP        VLAN
Management          Management   5                   Table 3   1
Cluster             Cluster      5                   Table 4   6
Live Migration      LM           20                  Table 4   7
Virtual Switch      vSwitch      1                   none      Native (1)
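The four steps and the table above can be scripted as the following sketch, using the in-box NIC teaming and Hyper-V cmdlets; the physical adapter names (NIC1 to NIC6) and the example IP (OHOWVSHV01) are assumptions:

```powershell
# Step 1: switch independent team with the Hyper-V Port distribution algorithm.
New-NetLbfoTeam -Name "Team01" -TeamMembers "NIC1","NIC2","NIC3","NIC4","NIC5","NIC6" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Step 2: Hyper-V switch without an automatic management vNIC; weight mode
# enables the minimum-bandwidth values from the table above.
New-VMSwitch -Name "vSwitch" -NetAdapterName "Team01" `
    -AllowManagementOS $false -MinimumBandwidthMode Weight
Set-VMSwitch -Name "vSwitch" -DefaultFlowMinimumBandwidthWeight 1

# Step 3: management OS virtual adapters with VLANs and bandwidth weights.
foreach ($vnic in @(
        @{Name = "Management"; Vlan = 1; Weight = 5},
        @{Name = "Cluster";    Vlan = 6; Weight = 5},
        @{Name = "LM";         Vlan = 7; Weight = 20})) {
    Add-VMNetworkAdapter -ManagementOS -Name $vnic.Name -SwitchName "vSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $vnic.Name -Access -VlanId $vnic.Vlan
    Set-VMNetworkAdapter -ManagementOS -Name $vnic.Name -MinimumBandwidthWeight $vnic.Weight
}

# Step 4: management IP for the first host (OHOWVSHV01, Table 3).
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 10.131.133.41 `
    -PrefixLength 24 -DefaultGateway 10.131.133.1
```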
The following IP addresses will be assigned to the iSCSI network on each host to allow for communication to the SANs.
Host Name    iSCSI NIC 1   iSCSI NIC 2   iSCSI NIC 3   iSCSI NIC 4   Subnet          VLAN
OHOWVSHV01   10.10.5.51    10.10.5.52    10.10.5.53    10.10.5.54    255.255.255.0   5
OHOWVSHV02   10.10.5.61    10.10.5.62    10.10.5.63    10.10.5.64    255.255.255.0   5
OHOWVSHV03   10.10.5.71    10.10.5.72    10.10.5.73    10.10.5.74    255.255.255.0   5
OHOWVSHV04   10.10.5.81    10.10.5.82    10.10.5.83    10.10.5.84    255.255.255.0   5
OHOWVSHV05   10.10.5.91    10.10.5.92    10.10.5.93    10.10.5.94    255.255.255.0   5
OHOWVSHV06   10.10.5.101   10.10.5.102   10.10.5.103   10.10.5.104   255.255.255.0   5
Jumbo Frames will be enabled on the iSCSI network cards and SAN controllers to increase data performance through the network. The frame size will be set to 9014 bytes.
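A sketch of the per-host iSCSI addressing and jumbo frame configuration (values for OHOWVSHV01 from the table above; the adapter aliases iSCSI 1 to iSCSI 4 are assumed names):

```powershell
# Assign the four iSCSI addresses for OHOWVSHV01.
$ips = "10.10.5.51", "10.10.5.52", "10.10.5.53", "10.10.5.54"
for ($i = 0; $i -lt $ips.Count; $i++) {
    New-NetIPAddress -InterfaceAlias ("iSCSI " + ($i + 1)) -IPAddress $ips[$i] -PrefixLength 24
}
# Enable 9014-byte jumbo frames; the exact value accepted varies by NIC driver.
Set-NetAdapterAdvancedProperty -Name "iSCSI*" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
```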
2.2.2
Switch Configuration
The Dell vStart 200 has four (4) Dell PowerConnect 7048 switches which will be used for management and iSCSI traffic. Two (2) of the switches will be connected to each other and need to be configured as trunk ports with a native VLAN ID of 5, because these switches will be used as the storage network. The other two (2) switches also need to be connected to each other, but their switch ports need to be configured as trunk ports with dot1q encapsulation. The native VLAN needs to be set per port and a VLAN ID range needs to be tagged per port to allow for isolated communication between the required management OS interfaces and the production workloads. The following figure provides the detail on the switch and connection layout:
[Figure 4: Switch and connection layout — the BG-iSCSI-01 stack (Storage Network) carries the iSCSI targets, iSCSI management, and the iSCSI connections for all hosts; the BG-LAN-01 stack (Production Network) carries the LBFO team members for all hosts, the DRAC connections, and the uplink trunks; remaining ports are free]
The connections from each source device are divided between the required destination switches. This is why the LBFO team needs to be created as switch independent: the team cannot be managed or created on the switches.
Switch Stack         OoB Management Name
PC7048 iSCSI Stack   BG-ISCSI-01
PC7048 LAN Stack     BG-LAN-01
The following network VLANs will be used in the Dell vStart 200 for isolating network traffic:
VLAN ID   Name
5         iSCSI
1         OoB Management
1         Management
6         Cluster
7         Live Migration
1         Virtual Machines
The following network parameters have been identified for the platform upgrade project:
Network Parameter   Primary        Secondary
DNS                 10.131.133.1   10.131.133.2
NTP                 10.131.133.1
SMTP                10.131.133.8
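The parameters above can be applied to the management interface as sketched below (the interface alias assumes the vEthernet naming from section 2.2.1):

```powershell
# DNS servers from the table above.
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management)" `
    -ServerAddresses 10.131.133.1, 10.131.133.2
# Point the Windows Time service at the NTP server.
w32tm /config /manualpeerlist:10.131.133.1 /syncfromflags:manual /update
```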
2.3 Storage
The Dell vStart 200 ships with three (3) Dell EqualLogic PS6100 iSCSI SANs with 24 x 600GB spindles each. That provides a total raw capacity of 14.4TB per SAN and a total of 43.2TB raw storage for the PPSA to use. The recommended RAID configuration is a RAID 50 set across each SAN. This provides a balance between storage capacity and acceptable read/write speed. Each SAN array is connected with four (4) iSCSI connections to the storage network as demonstrated in Figure 4. This allows four (4) iSCSI data paths from the SAN array to the Hyper-V hosts, which helps with connection redundancy and data performance because of the multiple iSCSI paths. Each of the Hyper-V hosts will be connected to the SAN arrays through four (4) network connections that are connected to the storage network as demonstrated in Figure 4. Multipath I/O (MPIO) will be enabled to allow for redundancy and to increase performance with an active/active path through all four (4) iSCSI connections. The Dell HIT toolkit will be used to establish and manage MPIO on each host. The following diagram provides a logical view of how the storage will be configured:
[Figure 5: Storage Configuration — RAID sets carved into LUNs that are presented to the Hyper-V hosts, with the VHD files stored on the LUNs]
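The Dell HIT toolkit will establish MPIO as stated above; for reference, a minimal sketch of the equivalent in-box Windows Server 2012 configuration is:

```powershell
# Claim iSCSI disks with the Microsoft DSM and default to round robin,
# giving active/active behaviour across the four iSCSI paths.
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```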
After applying RAID 50 to the SAN arrays the PPSA will have only 9TB available per array. There will be two (2) LUNs carved per SAN array with a size of 3TB each. The six (6) LUNs will be presented to each of the six (6) Hyper-V hosts for storing the virtual machine data. The following table provides the configuration detail for the three (3) EqualLogic SAN arrays:
EQL Group Name   Interface          IP              Subnet          Gateway        VLAN
BG-EQL-GRP01     Group Management   10.10.5.10      255.255.255.0                  5

EQL Array Name   Interface    IP              Subnet          Gateway        VLAN
BG-EQL-ARY01     eth0         10.10.5.11      255.255.255.0                  5
                 eth1         10.10.5.12      255.255.255.0                  5
                 eth2         10.10.5.13      255.255.255.0                  5
                 eth3         10.10.5.14      255.255.255.0                  5
                 Management   10.131.133.24   255.255.255.0   10.131.133.1   1
BG-EQL-ARY02     eth0         10.10.5.21      255.255.255.0                  5
                 eth1         10.10.5.22      255.255.255.0                  5
                 eth2         10.10.5.23      255.255.255.0                  5
                 eth3         10.10.5.24      255.255.255.0                  5
                 Management   10.131.133.25   255.255.255.0   10.131.133.1   1
BG-EQL-ARY03     eth0         10.10.5.31      255.255.255.0                  5
                 eth1         10.10.5.32      255.255.255.0                  5
                 eth2         10.10.5.33      255.255.255.0                  5
                 eth3         10.10.5.34      255.255.255.0                  5
                 Management   10.131.133.27   255.255.255.0   10.131.133.1   1
Table 11: SAN Array Configuration
The storage will be carved and presented to all the hosts in the six (6) node cluster as discussed in Table 3. This will allow the storage to be assigned as Cluster Shared Volumes (CSV) where the virtual hard drives (VHD) and virtual machine profiles will reside. A cluster quorum disk will also be presented to allow the cluster configuration to be stored. The following table provides the storage configuration for the solution:
Disk Name       Name           Storage Array   Size   Raid Set   Preferred Owner
HyperV-Quorum   Witness Disk   BG-EQL-ARY01    1GB    Raid 50    OHOWVSHV06
HyperV-CSV-1    CSV01          BG-EQL-ARY01    3TB    Raid 50    OHOWVSHV01
HyperV-CSV-2    CSV02          BG-EQL-ARY01    3TB    Raid 50    OHOWVSHV02
HyperV-CSV-3    CSV03          BG-EQL-ARY02    3TB    Raid 50    OHOWVSHV03
HyperV-CSV-4    CSV04          BG-EQL-ARY02    3TB    Raid 50    OHOWVSHV04
HyperV-CSV-5    CSV05          BG-EQL-ARY03    3TB    Raid 50    OHOWVSHV05
HyperV-CSV-6    CSV06          BG-EQL-ARY03    3TB    Raid 50    OHOWVSHV06
[Figure: Failover cluster storage layout — six cluster nodes (active and passive) attached to the shared storage holding the virtual hard disks (VHD), with failover between the nodes]
This allows six (6) nodes, connected to all the shared storage and networks, to be configured in a Hyper-V failover cluster. The failover cluster will be configured with five (5) active nodes and one (1) passive/reserve node for failover of virtual machines and for patch management of the Hyper-V hosts.
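A minimal sketch of the cluster build from any one host (node names and cluster IP from the tables above; the disk resource names assume the HyperV-CSV naming from the storage table):

```powershell
$nodes = "OHOWVSHV01","OHOWVSHV02","OHOWVSHV03","OHOWVSHV04","OHOWVSHV05","OHOWVSHV06"
Test-Cluster -Node $nodes   # run validation before creating the cluster
New-Cluster -Name "OHOWVSCV01" -Node $nodes -StaticAddress 10.131.133.40

# Convert the six 3TB data disks to Cluster Shared Volumes;
# the 1GB witness disk is left for the quorum.
Get-ClusterResource | Where-Object {
    $_.ResourceType -eq "Physical Disk" -and $_.Name -like "*CSV*"
} | Add-ClusterSharedVolume
```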
System Center 2012 Virtual Machine Manager
System Center 2012 Operations Manager
The management infrastructure itself will be hosted on the scale units deployed for the solution and a highly available SQL server will be deployed for the System Center databases.
Specification: SQL Server Node
Virtual
Windows Server 2008 R2 SP1 64-bit
SQL Server 2008 R2 SP1 with CU6 Enterprise Edition
8 Cores
16 GB RAM
2 x Virtual NICs (Public and Cluster Networks)
80 GB Operating System disk
1GB Quorum Disk
Disks presented to the SQL virtual machines as outlined in section 4.1.1
4.1.1
SQL Server Instances and Storage
Microsoft System Center 2012 components are database-driven applications. This makes a well-performing database platform critical to the overall management of the environment. The following instances will be required to support the solution:
Management Tool                               Instance Name   Primary Database          Authentication
System Center 2012 Virtual Machine Manager   VMM             VirtualMachineManagerDB   Windows
System Center 2012 Operations Manager        OM_OPS          OperationsManager         Windows
System Center 2012 Operations Manager        OM_DW           OperationsManagerDW       Windows
The following disk configuration will be required to support the Management solution:
SQL Instance   LUN     Purpose              Size     Raid Set
VMM            LUN 1   Database Files       50 GB    Raid 50
VMM            LUN 2   Database Log Files   25 GB    Raid 50
VMM            LUN 3   TempDB Files         25 GB    Raid 50
OM_OPS         LUN 4   Database Files       25 GB    Raid 50
OM_OPS         LUN 5   Database Log Files   15 GB    Raid 50
OM_OPS         LUN 6   TempDB Files         15 GB    Raid 50
OM_DW          LUN 7   Database Files       800 GB   Raid 50
OM_DW          LUN 8   DB Log Files         400 GB   Raid 50
OM_DW          LUN 9   TempDB Files         50 GB    Raid 50
4.1.2
SQL Service Accounts
Microsoft SQL Server requires service accounts for starting the database and reporting services required by the management solution. The following service accounts will be required to successfully install SQL server:
Purpose            Username   Password
SQL Server         SQLsvc     The password will be given in a secure document.
Reporting Server   SQLRSsvc   The password will be given in a secure document.
4.2.1
Scope
System Center 2012 Virtual Machine Manager will be used to manage Hyper-V hosts and guests in the datacenters. No virtualization infrastructure outside of the solution should be managed by this instance of System Center 2012 Virtual Machine Manager. The System Center 2012 Virtual Machine Manager configuration considers only the scope of this architecture and may therefore suffer performance and health issues if that scope is changed.
4.2.2
Servers
4.2.3
SCVMM Components
SCVMM Management Server
SCVMM Administrator Console
Command Shell
SCVMM Library
SQL Server 2008 R2 Client Tools
4.2.4
Management Server Prerequisites
The following software must be installed prior to installing the SCVMM management server.
Software Requirement                                        Notes
Operating System                                            Windows Server 2012
.NET Framework 4.0                                          Included in Windows Server 2012
Windows Assessment and Deployment Kit (ADK) for Windows 8   Available at the Microsoft Download Center
4.2.5
Console Prerequisites
The following software must be installed prior to installing the SCVMM console.
Software Requirement                               Notes
A supported operating system for the VMM console   Windows Server 2012 and/or Windows 8
Windows PowerShell 3.0                             Included in Windows Server 2012
.NET Framework 4                                   Included in Windows 8 and Windows Server 2012
4.2.6
Hyper-V Hosts System Requirements
SCVMM supports the following versions of Hyper-V for managing hosts.
Operating System         Edition                                                                      Service Pack                System Architecture
Windows Server 2008 R2   Enterprise and Datacenter (full installation or Server Core installation)   Service Pack 1 or earlier   x64
Hyper-V Server 2008 R2                                                                                                           x64
Windows Server 2012                                                                                   Not applicable              x64
Table 21: Hyper-V Hosts System Requirements
4.2.7
SCVMM Library
Libraries are the repository for VM templates and therefore serve a very important role. The Library share itself will reside on the SCVMM server in the default architecture; however, it should have its own logical partition and corresponding VHD whose underlying disk subsystem is able to deliver the required level of performance to service the provisioning demands. This level of performance depends on:
Number of tenants
Total number of templates and VHDs
Size of VHDs
How many VMs may be provisioned simultaneously
What the SLA is on VM provisioning
Network constraints
4.2.8
Dynamic Optimization and Power Optimization
In addition to the built-in roles, SCVMM will be integrated with System Center 2012 Operations Manager. The integration will enable Dynamic Optimization and Power Optimization in SCVMM. SCVMM can perform load balancing within host clusters that support live migration: Dynamic Optimization migrates virtual machines within a cluster according to the configured settings. SCVMM can also help to save power in a virtualized environment by turning off hosts when they are not needed and turning them back on when they are needed. SCVMM supports Dynamic Optimization and Power Optimization on Hyper-V host clusters and on host clusters that support live migration in managed VMware ESX and Citrix XenServer environments. For Power Optimization, the computers must have a baseboard management controller (BMC) that enables out-of-band management. The integration with Operations Manager will be configured with the default thresholds and Dynamic Optimization will be enabled. The Power Optimization schedule will be enabled from 8 PM to 5 AM.
4.2.9
SCVMM Service Account
The following service account will be required for SCVMM to integrate with Operations Manager and to manage the Hyper-V hosts:
Purpose                 Username   Password
SCVMM Service Account   SCVMMsvc   The password will be given in a secure document.
The service account will also be made local Administrator on each Hyper-V and SCVMM machine to allow for effective management.
SCVMM update management applies to the following computers:
Virtual machine hosts
Library servers
VMM management server
PXE servers
The WSUS server
PPSA can configure update baselines, scan computers for compliance, and perform update remediation. SCVMM will use the WSUS instance deployed with System Center 2012 Configuration Manager. Additional configuration will be required and is discussed in the deployment and configuration guides.
[Figure: Virtual machine templates — Template 2 (Medium), Template 3 (Large), Template 4 (Small), Template 5 (Medium), and Template 6 (Large), all connected to VLAN 1]
[Figure: Proposed host layout with additional network cards — 128GB RAM, two Intel processors, six onboard 1GB NICs plus four expansion 1GB NICs, dedicated iSCSI connections, and an iLO/DRAC management port]
If additional network cards cannot be acquired by the PPSA then the current host design stays valid and all four (4) of the iSCSI network adapters need to be shared with the virtual environment. The jumbo frame size must also be set to 9014 bytes on each of the virtual iSCSI interfaces in the guest cluster virtual machines to take advantage of the performance benefits. Virtual iSCSI target providers will not be implemented in the solution because of the performance impact on the other guest machines.
The remainder of this section provides the detailed design for the PPSA SQL Server 2012 AlwaysOn availability group.
6.1.1
Guest Cluster Network Configuration
The newly created virtual machines must be clustered to allow SQL Server 2012 to create an AlwaysOn availability group. The following table provides the management and cluster network details.
Name         Type              Management Network   VLAN   Cluster Network   VLAN
OHOWSQLS01   Virtual Machine
OHOWSQLS02   Virtual Machine
OHOWSQLC01   Cluster Name                                  None
The quorum will be configured as node majority after establishing the Windows Server 2012 cluster, because shared storage isn't available. This is however not optimal, and the witness must be configured using node and file share majority. This will allow the Windows cluster to save the cluster configuration and to vote for the cluster health. The file share requires only 1024MB of storage and can be located on the new file services. The following file share can be created: \\fileserver\OHOWSQLC01\Witness Disk. Both the SQL Server virtual machine names and the Windows cluster name must have full read/write access to the share.
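Once the share exists and permissions are granted, the quorum change is a single cmdlet, sketched here:

```powershell
# Switch the guest cluster from node majority to node and file share majority.
Set-ClusterQuorum -Cluster "OHOWSQLC01" `
    -NodeAndFileShareMajority "\\fileserver\OHOWSQLC01\Witness Disk"
```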
6.2.1
Feature Selection
The following features will be required when installing SQL Server 2012 SP1 to allow for AlwaysOn availability groups:
Instance Features:
Shared Features
Client Tools Connectivity
Client Tools Backwards Compatibility
Management Tools Complete
The shared features will be installed on C:\Program Files\Microsoft SQL Server\.
6.2.2
Instance Configuration
[Table: SQL Server instance configuration — the instances host the System Center databases (e.g., VirtualMachineManagerDB) and the SharePoint instance hosting SharePointDB; all instances use Windows authentication, with a per-instance memory limit (e.g., 8GB) and a designated primary server (SQL01 or SQL02)]
The instance root directory for all instances will be C:\Program Files\Microsoft SQL Server\.
The following disk configuration will be required and must be presented to both the virtual machines as fixed disks.
SQL Instance   LUN     Purpose                       Size     Raid Set   Drive Letter
SCVMM          LUN 1   Database and Temp Files       50 GB    Raid 50    E
SCVMM          LUN 2   Database and Temp Log Files   25 GB    Raid 50    F
SCOM_OPS       LUN 3   Database and Temp Files       25 GB    Raid 50    G
SCOM_OPS       LUN 4   Database and Temp Log Files   15 GB    Raid 50    H
SCOM_DW        LUN 5   Database and Temp Files       800 GB   Raid 50    I
SCOM_DW        LUN 6   Database and Temp Log Files   400 GB   Raid 50    J
SCCM           LUN 7   Database and Temp Files       700 GB   Raid 50    K
SCCM           LUN 8   Database and Temp Log Files   350 GB   Raid 50    L
Table 27: SQL Server Disk Configuration
When installing the individual instances, the data root directory will be C:\Program Files\Microsoft SQL Server\ and the individual database, TempDB, database log, and TempDB log directories will be placed on the correct drive letters as outlined in Table 27.
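As a hedged sketch, an unattended install of one instance (the SCVMM instance is shown) can place the directories per Table 27 using the standard SQL Server 2012 setup parameters; the PPSA domain name and the password placeholder are assumptions:

```powershell
# Run from the SQL Server 2012 SP1 media root on each SQL virtual machine.
.\setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE /INSTANCENAME=SCVMM `
    /INSTANCEDIR="C:\Program Files\Microsoft SQL Server" `
    /SQLUSERDBDIR="E:\Data" /SQLTEMPDBDIR="E:\TempDB" `
    /SQLUSERDBLOGDIR="F:\Logs" /SQLTEMPDBLOGDIR="F:\TempDBLogs" `
    /SQLSVCACCOUNT="PPSA\SQLsvc" /SQLSVCPASSWORD="<from the secure document>" `
    /SQLSYSADMINACCOUNTS="PPSA\SQL Admins" /IACCEPTSQLSERVERLICENSETERMS
```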
6.2.3
Service Accounts and Groups
Microsoft SQL Server requires service accounts for starting the database and reporting services, and SQL administrator groups for allowing management of SQL Server. The following service accounts and group will be required to successfully install SQL Server:
Purpose            Name       Password
SQL Server         SQLsvc     The password will be given in a secure document.
Reporting Server   SQLRSsvc   The password will be given in a secure document.
SQL Admins group              None
The SQL Admins group must contain all the required SQL administrators to allow them to manage SQL Server 2012 SP1.
6.2.4
AlwaysOn Availability Groups
The following section provides the detailed design for the SQL Server 2012 AlwaysOn availability groups. Before creating the SQL availability groups, the PPSA must back up all the existing databases, and all the databases must be set to the full recovery model. The individual SQL Server instances must also be enabled for AlwaysOn availability groups in SQL Server Configuration Manager.
The following table provides the availability group configuration. This configuration needs to be done by connecting to the individual SQL instances (a scripted sketch follows at the end of this section).
Group Name           Databases in Group        Primary Server   Replica Server   Replica Configuration   Listener
System Center 2012   VirtualMachineManagerDB   SQL01            SQL02            Automatic Failover
                     OperationsManagerDB       SQL01            SQL02            Automatic Failover
                     OperationsManagerDW       SQL01            SQL02            Automatic Failover
                     ConfigurationsManagerDB   SQL02            SQL01            Automatic Failover
SharePoint           SharePointDB              SQL02            SQL01            Automatic Failover
When creating the availability groups there will be a requirement for a file share to do the initial synchronization of the databases. A temporary share can be established on the file server called \\fileshare\SQLSync.
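As a sketch of the availability group creation for the first database, using the SQL Server 2012 PowerShell module (the instance name SCVMM, the 5022 endpoint port, the ppsa.local suffix, and the group name AG-SystemCenter are assumptions):

```powershell
Import-Module SQLPS -DisableNameChecking

# Enable AlwaysOn on both instances (restarts the SQL Server service).
Enable-SqlAlwaysOn -ServerInstance "OHOWSQLS01\SCVMM" -Force
Enable-SqlAlwaysOn -ServerInstance "OHOWSQLS02\SCVMM" -Force

# Replica templates; synchronous commit with automatic failover per the table above.
$primary = New-SqlAvailabilityReplica -Name "OHOWSQLS01\SCVMM" `
    -EndpointUrl "TCP://OHOWSQLS01.ppsa.local:5022" `
    -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
$secondary = New-SqlAvailabilityReplica -Name "OHOWSQLS02\SCVMM" `
    -EndpointUrl "TCP://OHOWSQLS02.ppsa.local:5022" `
    -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11

# Create the group on the primary instance and join the secondary replica.
New-SqlAvailabilityGroup -Name "AG-SystemCenter" -Path "SQLSERVER:\SQL\OHOWSQLS01\SCVMM" `
    -AvailabilityReplica @($primary, $secondary) -Database "VirtualMachineManagerDB"
Join-SqlAvailabilityGroup -Path "SQLSERVER:\SQL\OHOWSQLS02\SCVMM" -Name "AG-SystemCenter"
```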