1.0 Plan and Design
1.1 Network Design
1.1.1 Internet Access
1.1.1.1 Device research
1.1.1.2 Speed requirements
1.1.2 Branch site design
1.1.2.1 Device research
1.1.2.2 Voice infrastructure design
1.1.3 Site connectivity
1.1.3.1 Connectivity requirements
1.1.4 HQ Office and Manufacturing Design
1.1.4.1 Device research
1.1.4.2 Wireless design/requirements
1.1.4.3 Distribution/Access design
1.1.4.4 Voice infrastructure design
1.1.5 Datacenter Network Design
1.1.5.1 Device research
1.1.5.2 Network core design
1.1.5.3 Distribution/Access design
1.1.5.4 Device management design
1.1.5.5 Voice infrastructure design
1.1.6 Security Plan
1.1.6.1 DMZ Security Requirements
1.1.6.2 Device research
1.1.6.3 Isolated device eligibility
1.1.6.4 Internet edge security design
1.1.6.5 Datacenter/Branch security design
1.1.7 Service Provider Research
1.1.7.1 Available service providers in each region
1.1.7.2 Common speeds available for business class cable and DSL
1.1.7.3 Availability of fiber providers
1.2 Systems Design
1.2.1 Server Requirements
1.2.1.1 Performance requirements
1.2.1.2 Server research
1.2.2 SAN Requirements
1.2.2.1 Storage needs
1.2.2.2 Connectivity requirements
1.2.2.3 Expandability
1.2.2.4 Device research
1.2.3 Terminal Services & File Server Design
1.2.3.1 Terminal Services design
1.2.3.2 Terminal Services performance requirements
1.2.3.3 Domain Controller placement/design
1.2.3.4 File server location
1.2.3.5 File share design
1.2.3.6 DFS design
1.2.4 VMware Design
1.2.4.1 vCenter/vMotion design
1.2.4.2 VM backup requirements
1.2.4.3 VMware software research
1.2.5 Workstation Requirements
1.2.5.1 Remote worker workstation research
1.2.5.2 Administrative workstation research
1.2.5.3 Manufacturing workstation research
1.2.5.4 Printer requirements
1.2.5.5 Printer locations
1.3 Software Requirements
1.3.1 Productivity Software
1.3.1.1 Productivity Software availability requirements
1.3.1.2 Productivity research
1.3.2 Helpdesk Software
1.3.2.1 Feature requirements
1.3.2.2 Employee numbers
1.3.2.3 Helpdesk research
1.3.3 Imaging Software
1.3.3.1 Workstation models
1.3.3.2 Workstation numbers
1.3.3.3 Workstation locations
1.3.3.4 Imaging Software research
1.3.3.5 Imaging process/design
1.3.4 Backup Solutions
1.3.4.1 VMware backup requirements
1.3.4.2 Exchange/DC backup requirements
1.3.4.3 SQL Server backup requirements
1.3.4.4 VM Backup software research
1.3.4.5 Exchange/DC backup software research
1.3.4.6 SQL Server backup software research
1.4 RFP Response
1.4.1 RFP Response Draft
1.4.1.1 Network Design Draft
1.4.1.2 Systems Design Draft
1.4.1.3 Software Requirements Draft
1.4.2 RFP Response Final
1.4.2.1 RFP Response Final Copy
1.4.2.2 RFP Response Final Editing
1.4.2.3 RFP Response Submission
1.5 Project Plan
1.5.1 Project Plan Draft
1.5.1.1 Project Scope Draft
1.5.1.2 Risk Register Draft
1.5.1.3 Project Schedule Draft
1.5.1.4 Work Breakdown Structure/Gantt Chart Draft
1.5.1.5 Stakeholder Management Plan Draft
1.5.2 Project Plan Final
1.5.2.1 Project Scope Final
1.5.2.2 Risk Register Final
1.5.2.3 Project Schedule Final
1.5.2.4 Work Breakdown Structure/Gantt Chart Final
1.5.2.5 Stakeholder Management Plan Final
1.5.2.6 Project Plan Assembly
1.5.2.7 Project Plan Final Review/Edit
1.5.2.8 Project Plan Submission
2.0 IP Address Scheme
2.1 Register for public addresses with ARIN
2.1.1 Contact ARIN for address allocation
2.1.2 Get address costs
2.1.3 Arrange payment for address allocation
2.2 Choose Internal Address Range
2.2.1 Address requirements
2.2.2 Subnet sizing
2.2.3 Summarization points
2.2.4 Summarization addresses
2.3 Subnet/Supernet Ranges
2.3.1 Subnet range final allocation
3.0 VLAN/Switch Layout
3.1 Reference VLAN requirements from planning stage
3.2 Identify VLAN ranges
3.2.1 Management VLAN requirements
3.2.2 Internet edge VLAN assignment
3.2.3 Branch/Sales office VLAN assignment
3.2.4 HQ/Manufacturing VLAN assignment
4.0 Site to Site Communication Plan
4.1 Service provider negotiations
4.1.1 SP initial contact
4.1.2 SP contract negotiation
4.1.3 SP contract finalization
4.2 Site to Site Security Plan
4.2.1 Identify inter-site communications requirements
4.2.2 Security device RFQ Process
4.2.3 Security device purchases
4.2.4 Security device installation
4.3 Site to Site Voice Plan Finalization
4.3.1 Confirm voice plan
4.3.2 Voice gear RFQ Process
4.3.3 Voice gear purchases
4.3.4 Voice gear installation
4.4 Site to site Design Plan Finalization
4.4.1 Site to site RFQ Process
4.4.2 Site to site gear purchases
4.4.3 Site to site gear installation
4.5 Service Provider CPE Installation
5.0 Communication Backup Plan
5.1 Identify Communication Failure Points
5.1.1 Isolate communication failure points
5.2 Identify Communication Backup Paths
5.2.1 Communication backup plan implementation
5.3 Create Communications Failover Process
5.3.1 Comms failure process draft
5.3.2 Comms failure process editing
5.3.3 Comms failure process final submission
6.0 Network Security Design
6.1 Internet Border Security Configuration
6.1.1 Implement Security/Device Configuration on Internet Border devices
6.1.2 Confirm configuration/test
6.1.3 Configure Remote Access firewalls
6.2 Branch Site/Sales Office Security Configuration
6.2.1 Branch Site/Sales Office termination device configuration
6.2.2 Confirm configuration/test
6.3 Site to Site Security Configuration
6.3.1 Implement site to site VPN and communication configuration
6.3.2 Confirm configuration/test
6.4 Datacenter Security Configuration
6.4.1 Implement and configure datacenter hardware
6.4.2 Implement security design
6.4.3 Confirm configuration/test
6.5 HQ Office & Manufacturing Security Configuration
6.5.1 HQ Office device configuration
6.5.2 HQ Office security design implementation
6.5.3 Confirm HQ configuration/test
6.5.4 Manufacturing device configuration
6.5.5 Manufacturing security implementation
6.5.6 Confirm Manufacturing configuration/test
7.0 DHCP Design
7.1 Identify Dynamically Addressable Network Segments
7.1.1 Allocate address ranges
7.2 Identify DHCP Server Locations
7.3 Identify DHCP Options
7.3.1 Identify Default Gateways
7.3.2 Identify DNS Servers
7.3.3 Identify WLCs
7.3.4 Identify Windows KMS Activation Servers
7.4 Setup and Configure DHCP Servers
7.4.1 Configure Datacenter DHCP Servers
7.4.2 Configure HQ/Manufacturing DHCP Servers
7.4.3 Configure Branch/Sales office DHCP Servers
8.0 Active Directory
8.1 Setup HQ/Datacenter Domain Controllers
8.1.1 Virtual Machine Setup
8.1.2 Domain Setup
8.1.3 Domain Controller Configuration
8.1.4 Certificate Authority Setup
8.1.5 Domain testing
8.2 Setup HQ/Datacenter Servers
8.2.1 Virtual Machine setup
8.2.2 File Server Setup
8.2.3 DFS Setup/Configuration
8.2.4 DFS Testing
8.2.5 Share Testing
8.2.6 SQL/Database server setup
8.2.7 SQL/Database server testing
8.3 Setup Branch Office Servers (File, DC, DNS)
8.3.1 Setup Branch Office Servers
8.3.2 Configure AD Services
8.3.3 Configure File Services
8.3.4 Configure DNS Services
8.4 Test and Verify Domain Functionality
8.4.1 Confirm Domain replication
8.4.2 Confirm Email Functionality
8.4.3 Confirm DFS Replication
8.4.4 Confirm SQL Server functionality
9.0 DMZ Configuration
9.1 Configure Spam and Email Servers
9.2 Configure External DNS Servers
9.3 Configure Remote Access Appliances
9.4 Configure DMZ Network Security
9.5 Configure External Web Servers
10.0 HQ Servers
10.1 Configure HQ TS Servers
10.1.1 Setup Virtual Machines
10.1.2 Install necessary applications
10.1.3 Test applications
10.2 Configure HQ Database Servers
10.2.1 Setup Virtual Machines
10.2.2 Install SQL Server
10.2.3 Confirm database operation
10.3 Configure HQ Voice Servers and Switches
10.3.1 Setup Virtual Machine
10.3.2 Setup Voice switches and T1 lines
10.3.3 Configure Voice infrastructure
10.3.4 Confirm Voice network functionality
10.4 Test and Verify Server Functionality
10.4.1 Test HQ to Datacenter functionality
10.4.2 Perform final baseline testing
11.0 Faculty Demonstration and Final Report
11.1 Faculty Demo
11.1.1 Faculty Demo Presentation Prep
11.1.2 Faculty Demo Presentation Finalization
11.1.3 Faculty Demo
11.2 Final Report Draft
11.3 Final Report Submission
Falcon Industries requires a large number of devices on the network, with the ability to easily expand those numbers as they see fit. Because of this requirement we will be using the 172.16.0.0/12 RFC 1918 private address range for FI’s internal network. Since Interop recently returned their /8 address range, we will be requesting a portion of that range for FI’s public IP space: specifically 45.0.0.0/25, which will give FI 126 usable addresses (two addresses are lost to the network and broadcast addresses). We are allocating such a large range to ensure that FI has enough addresses for a NAT overload pool of two addresses, plus the various static NAT entries they will require. The exact IP allocation and VLAN scheme has yet to be confirmed by the team but will be a top priority within the coming days.
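As a quick sanity check on the figures above, the address math can be verified with Python's standard-library ipaddress module (a sketch; the two ranges are the ones named in this plan):

```python
import ipaddress

# The internal and public ranges proposed for FI in the plan above.
internal = ipaddress.ip_network("172.16.0.0/12")
public = ipaddress.ip_network("45.0.0.0/25")

# A /12 leaves 20 host bits, giving over a million internal addresses
# to grow into.
print(internal.num_addresses)  # 1048576

# A /25 holds 128 addresses; subtracting the network and broadcast
# addresses leaves 126 usable, matching the figure quoted above.
usable_public = public.num_addresses - 2
print(usable_public)  # 126
```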
Network Design
Routing
A significant amount of routing will be required in the datacenter core, at the internet edge, and between the branch and sales offices. In keeping with the network’s scalability requirements, OSPF will be used within the datacenter core and out to each branch site and sales office. Because of the largely remote nature of FI’s employees, more control over routing will be required at the network edge. This requirement is satisfied by using BGP at the network edge between the border routers and the ISP routers. Static routing will be used between the ISP and FI, and will also be used by the internet edge Cisco ASAs. The “inside” interfaces of the ASAs will take part in OSPF and will redistribute the default route into OSPF. OSPF will also be extended over the WAN to the branch and sales offices; however, they will only be configured as stub areas. All routing protocols will be configured with authentication using timed key chains, rotating keys on a daily basis. Please see the following list for a breakdown of the network areas and the necessary routing configuration.
Internet Edge
Internet Edge Border Routers
- BGP accepting a partial internet routing table while sending out FI’s prefixes
- BGP configured using authentication with a keychain using rotating keys
- GLBP will be configured on the inside interfaces of the BRs, and interface tracking will be used
- Will consist of 2 Cisco 7206VXR Routers with a PA-G2 network engine, and 4 PA-GE 1Gbps fiber
modules
- An interface on each BR will be used to feed into the remote access firewalls, which will be
assigned public IP’s on the external interface
- Will be used primarily to aggregate ISP connections into the routers, and pass traffic to the
remote office firewalls
- Will use OSPF (area 0) over the WAN, using authentication with a keychain using rotating keys
- Default routes will be used between the BR routers’ GLBP virtual interface – this ensures that traffic can always be forwarded even if a router should go down, and will also load balance it in the meantime
- The “inside” interface will aggregate several inside VLANs, for things such as the firewall failover VLAN, the DMZ, etc., and will participate in OSPF – Area 0
- 2 Cisco ASA 5550 firewalls will be used
- Will establish and aggregate site to site VPN tunnels between the sales offices and the HQ that
are delivered over the public internet
- Will establish and aggregate site to site VPN tunnels between the branch offices and sales offices
- Inside and outside interfaces will take part in OSPF – Area 0
- Access between sites will be controlled at this point using access control lists
- 2 Cisco ASA 5550 firewalls will be used
Switching
Based on the designs proposed in the network design stages of the project, we will be going forward
with a design that consists of a mixture of Cisco Catalyst 6500, 4500, 3750, 3560, and 2960 switches
throughout the company. By utilizing the 3750 series and higher switches we are able to take advantage
of many of Cisco’s redundancy features that are built into the switches, ensuring optimal uptime within
the network. The idea behind the design is that every device and path is expected to have a duplicate
device on the network to provide a backup path/service.
The HQ Datacenter core and office/manufacturing distribution and access layers are going to be built
using a combination of Cisco 6509-E, 4506-E, 3750X and 2960 series switches and various line cards in
the chassis switches. The sections below will outline the main configuration and technology points of
each building and network layer.
Layer 2 Quality of Service will be applied at these layers to ensure timely delivery of voice and video traffic up to the distribution layer. The switches will also provide Power over Ethernet Plus, which supplies up to 30 watts of power to end-user devices and wireless access points, which will be put in place at both the HQ office and manufacturing buildings. The wireless access points chosen for FI’s wireless deployment are the Cisco 1142N WAPs, which provide dual radios allowing for use of 802.11n’s Multiple-Input, Multiple-Output (MIMO) technology, providing up to 300 Mbps of throughput.
The distribution layer in the HQ office and manufacturing building will consist of Cisco 3750G-12S-E all-fiber switches. The all-fiber switches were chosen as they are used simply to aggregate the uplinks of the access layer switches and to control routing and security between subnets at the office/building level. The 3750s also make use of Cisco’s StackWise technology, which provides the same switch-stacking capability as the FlexStack technology available on the 2960s. Because of this we will again be able to combine multiple switches into one, allowing us to have dual stacks of redundant switches with uplinks into the datacenter distribution layer switches.
The entire network team will be assigned to configure these devices, and their configuration will consist of the following technologies/protocols:
Service Providers
High speed connections are required between HQ and all regional sites, both for increased productivity and for offsite backups. It has been decided that Rogers and Hydro One will be used for all Eastern and Maritime connectivity, while TELUS and Bell will be used on the west coast.
All service providers will be required to offer FI a single Layer 2 100 Mbps Ethernet circuit, with a guaranteed SLA of 99.999% uptime. The service providers will also provide Layer 2 Class of Service over the WAN between sites, which we will then overlay with Layer 3 QoS.
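When evaluating provider contracts, an uptime percentage converts directly into an allowed-downtime budget. A minimal sketch of that conversion (the five-nines and three-nines figures below are illustrative targets, not quotes from any contract):

```python
# Convert an uptime SLA percentage into the maximum downtime it
# permits per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525600

def allowed_downtime_minutes(uptime_percent: float) -> float:
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

# "Five nines" allows only a few minutes of outage per year, while
# "three nines" allows nearly nine hours.
print(round(allowed_downtime_minutes(99.999), 2))  # 5.26 minutes/year
print(round(allowed_downtime_minutes(99.9), 1))    # 525.6 minutes/year
```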
Given FI’s largely remote staff and web presence, Rogers and Hydro One will also be recruited to deliver a 400 Mbps internet circuit over the same CPE device that the L2 circuit is delivered on. The circuit will be required to support bandwidth increases from as little as 10 Mbps up to 100 Mbps as needed, in 10 Mbps increments. The TELUS and Hydro One lines will also be used to propagate data to the backup datacenter in Vancouver, BC. This will be done every 24 hours, at midnight, to avoid bandwidth stress on the lines and to help keep speeds optimal. This will also allow Falcon Industries to keep themselves up to date should anything happen to their Mississauga-based facility.
The backup datacenter in Vancouver will provide an exact replica of the Mississauga datacenter, but will not require Falcon Industries to make any upfront capital purchases, and will be maintained entirely by the datacenter owners.
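To gauge whether a midnight replication job can finish before business hours, the transfer budget can be estimated from the line rate. A rough sketch, assuming a 6-hour window and 70% effective throughput (both are illustrative assumptions, not figures from this plan):

```python
# Estimate how much changed data the nightly replication to the
# Vancouver datacenter can move. Only the 400 Mbps line rate comes
# from the plan; the window and efficiency are assumptions.
LINE_RATE_MBPS = 400
WINDOW_HOURS = 6      # assumed overnight window
EFFICIENCY = 0.70     # assumed protocol/encryption overhead factor

effective_mbps = LINE_RATE_MBPS * EFFICIENCY
# megabits -> megabytes (/8) -> gigabytes (/1000)
gigabytes = effective_mbps * WINDOW_HOURS * 3600 / 8 / 1000
print(round(gigabytes))  # 756 GB per night under these assumptions
```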
Branch & Sales Offices
FI’s branch offices will employ approximately 35 people (with the Toronto office being one of the local offsite backup storage locations). In keeping with the demand for redundancy, all branch offices will be connected together with the Layer 2 100 Mbps services provided by the providers listed above. However, because of the low staff numbers and remoteness of the sales offices, they will connect to HQ over the internet, with each sales office having one DSL and one cable connection. The branch offices will make use of dual Cisco 3945 ISR routers to terminate the L2 circuits and site to site VPN tunnels, as well as 2 Cisco Catalyst 3560X-48P-S switches for LAN connectivity and routing. The branch offices will also be equipped with 3 Cisco 1142N WAPs controlled by the central controllers at HQ.
The sales offices will be equipped with two Cisco 2921 ISRs with 2 ADSL cards in each, and a single 48 port switch module with PoE for each router. The list below outlines the high points of the branch and sales office gear.
Branch Offices
- The branch office routers will be running OSPF as stub areas – areas 5 – 9
- A default route learned via OSPF will direct all traffic over the WAN
- The LAN switch will be configured with floating static routes pointing traffic to each router
- The LAN switch will be configured using sticky MAC address filtering to prevent unauthorized access to the network
- No ACLs will be applied to the network at each branch site, other than to restrict device management access
Sales Offices
Wireless
Wireless is something the customer has requested, and to meet that request we will be leveraging the 1142 APs at each site, along with two Cisco 4402 WLCs. The wireless networks will be available for all corporate devices, and one single public VLAN will be available at every Wi-Fi-enabled location. WPA2 in conjunction with EAP-TLS will be the security method for corporate wireless access, while the public wireless will use WPA2 with a static password and a captive portal. The list below highlights the configuration of the WLCs and WAPs.
Wireless Configuration
The voice network will be configured in a manner that doesn’t require the voice switches to be 100% operational at all times. Once calls have been established through the phones, they are handed off to the handsets themselves to manage, so if a voice switch goes down there are four others at HQ and one at each branch site to take over.
ShoreTel also has a mobile application that we will install on all of the corporate BlackBerrys in order to extend the corporate voice network out to the mobile sales people and mobile administrative staff. Along with the BlackBerrys, we will be installing BlackBerry Enterprise Server to provide secure corporate email and document access to the mobile staff of Falcon Industries.
The mobile staff laptops will be outfitted with 3G wireless cards to allow them to remotely access the Juniper SA 4500 VPN appliances from wherever they are, as that will be their primary means of communication. The Juniper appliances will deploy VPN clients that launch at the start-up of each machine, making the software behave as if the machine were on site and ensuring that the appropriate security settings are still pushed out to the users no matter where they are.
Network security will be enforced across the network, restricting access between departments and
servers as well as to network management interfaces and DMZ servers. Further security requirements
will be enforced using Active Directory Group Policies, Security Groups and domain trusts.
Virtualization
As huge supporters of virtualization, we will be leveraging our knowledge of it to install a top of the line virtualization environment, using VMware vSphere along with vCenter to provide a highly available server infrastructure based on 8 Dell PowerEdge R810 servers. In keeping with the redundant environment, all virtual storage will be provided by four IBM DS3400 SANs packed with 6 TB worth of space, connected together with four Brocade Fibre Channel switches.
The storage/virtual infrastructure will be built with HA in mind. We will be configuring vCenter to make
use of vMotion, allowing virtual machines to migrate themselves between physical hosts seamlessly, and
automatically, resulting in almost zero downtime for servers. The highlights of the virtual infrastructure
are listed below.
Virtualization Configuration
- The Dell servers will be configured with dual socket motherboards and 24 GB of RAM or more
- Minimal local storage is allocated to the Dell servers – just enough for the vSphere installation
- The SANs will be partitioned into two RAID 10 logical drives, along with a single RAID 5 logical drive on each SAN, providing high performing drives for high speed database access and for regular application access
- The FC switches will be configured to allow all servers to be in the same zone, ensuring access to the virtual machines from every server
- Cisco Nexus 1000V virtual switches will be used on every physical host to provide enhanced network security between the local VMs
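The usable capacity of this layout depends on the disk arrangement, which is not specified here. A hedged sketch, assuming each 6 TB SAN holds twelve 500 GB drives split into two 4-disk RAID 10 sets and one 4-disk RAID 5 set (all assumptions for illustration):

```python
# Rough usable-capacity estimate for one SAN under the RAID layout
# described above. Disk count and size are assumed, not from the plan.
DISK_GB = 500

def raid10_usable(disks: int) -> int:
    # Mirrored pairs halve raw capacity.
    return disks * DISK_GB // 2

def raid5_usable(disks: int) -> int:
    # One disk's worth of capacity goes to parity.
    return (disks - 1) * DISK_GB

usable = raid10_usable(4) + raid10_usable(4) + raid5_usable(4)
print(usable)  # 3500 GB usable out of 6000 GB raw
```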
Local software will consist of nothing more than anti-virus and productivity software on each user PC, as 90% of the remaining software will be delivered via the web or terminal services. Maintenance and IT staff will be provided with help desk software on central servers to manage user help requests.