Cisco Switching/Routing :: M1000E - Small Datacenter Design
Apr 19, 2012
Small datacenter design. My requirements and setup will be as follows:
- Dell PowerEdge M1000E blade chassis (initially one full chassis)
- Dell PowerConnect 10GbE blade switches
- Dell Compellent storage array, 10Gb iSCSI with redundant controllers
- Dell PowerConnect 7024 dedicated to external storage
- Virtual host blade servers
- 2 x Cisco ASA for firewall (5525-X or similar in active-active configuration)
- 2 x redundant routers or switches as gateway to the public internet
I am looking to segregate customers (approximately 100) into separate VLANs at the access layer and route them up to the Cisco ASA firewalls using dot1q trunking for segregation. The Cisco ASAs will perform NAT and route to the redundant gateways. I then need to police each customer's traffic at the gateway to limit bandwidth and perform specific traffic marking, along with simply routing out to the internet.
Budget is somewhat restrictive, so I am looking for the most cost-effective devices I can use at the gateway to perform the traffic policing/marking/routing for each customer.
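For the policing piece, something like this MQC policy per customer sub-interface is what I have in mind on the gateway router (VLAN 101, the 20 Mbps rate, and the AF21 marking are just placeholders, one policy per customer):
[code]
! Hypothetical per-customer policer (IOS MQC)
policy-map CUST-POLICE-20M
 class class-default
  police cir 20000000 bc 625000 conform-action set-dscp-transmit af21 exceed-action drop
! Applied on the customer's dot1q sub-interface
interface GigabitEthernet0/1.101
 encapsulation dot1Q 101
 service-policy input CUST-POLICE-20M
[/code]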
We are planning a Nexus datacenter project with this layout. Our experience with Nexus switches is limited so far, and the manuals are very extensive. Both N5Ks should be connected directly to all 4 N2K switches. I did not find a layout like this in the manuals, only a design where just 2 N2Ks are connected to one N5K, with this fex config. Now I'm not sure whether it is right to build a config like this with the same slots and FEXes, or with different slots and FEXes.
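From what I can piece together, this is the dual-homed (active-active) FEX layout: each N2K gets a fabric port-channel with one member link to each N5K, and the same FEX number and vPC number are configured identically on both N5Ks. A sketch with hypothetical numbering - can anyone confirm this is the intended pattern?
[code]
! On BOTH N5Ks, identically (FEX 100 = first N2K)
feature vpc
feature fex
fex 100
  pinning max-links 1
interface Ethernet1/1
  switchport mode fex-fabric
  fex associate 100
  channel-group 100
interface port-channel100
  switchport mode fex-fabric
  fex associate 100
  vpc 100
! Repeat with fex 101-103 for the other three N2Ks
[/code]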
We have Microsoft servers and other application servers (around 12 in number) which should have gigabit connections to the access switch. In turn this access switch will be connected to our distribution switch, a 4503. Which model of access switch fits best from the 3 models below? It should be cost effective as well.
I have 2 6504's running HSRP as my core. They are each EtherChanneled to my datacenter switch (3750 stack) -- see image below. The problem I am having is with the EtherChannel status:
Core1 Po11 status: w (waiting)
Core2 Po11 status: P (bundled)
DC11 Po48 status: P -- but only to Core2; the interfaces to Core1 are suspended. (See attached configuration documents.) None of the devices have any information in the logs. I run this same configuration in my central location, but there we are running Nexus 7000's. With the 6500's, do I need to split the port channels on the 3750 to allow them to negotiate the EtherChannel? If I split the port channels, are there any concerns? Should I expect to see the EtherChannel status as P (bundled) or H (hot-standby)?
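From what I've read since posting, unless the 6504s are in VSS they are two independent LACP partners, so the 3750 stack will only bundle the members that face one chassis and suspend the rest - which matches what I'm seeing (the 7000s work because vPC presents one partner). So I'm guessing the fix is to split the 3750 side into one bundle per core, something like this (interface numbers hypothetical) - can anyone confirm?
[code]
! 3750 stack: one port-channel per core chassis
interface range GigabitEthernet1/0/47 - 48
 description Links to Core1
 channel-group 11 mode active
interface range GigabitEthernet2/0/47 - 48
 description Links to Core2
 channel-group 12 mode active
[/code]
I assume spanning tree will then block one of the two bundles, which is expected with non-VSS cores.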
We have two Catalyst 6506 switches with 10 Gb uplinks and around 120 edge switches (Catalyst 3750-X). However, the module on the core where the servers are connected is still 1000 Mbps per port. If we introduce a Nexus switch to the datacenter, what kinds of benefits can we reap in a virtualised environment as well as a physical one? These are some of my queries: Can we reduce the number of edge switches (through virtualisation)? How is the interoperability between Catalyst IOS and Nexus NX-OS, and how will this affect the environment? What will be the overall benefits? What are the cons of this introduction?
We have 2 sites, each with 2 x 4506 switches which will be connected together using an EtherChannel. The switches will provide access ports for client devices and will be configured with HSRP to provide gateway redundancy; SW1 will be HSRP active. 2 metro ethernet links will be installed in each site which will connect back to our HQ sites. OSPF will be used over the backbone to provide resiliency, to allow shortest-path routing to each HQ, and to prevent traffic over the HQ-to-HQ link.
The 4506s will be trunked together with an SVI for providing OSPF adjacency. For the traffic flow from SW2 to HQ2, traffic will hit SW1 and then route back to SW2 and then to HQ2. Is this the best way to do this? Should a second link be connected between the switches just for routing, or should something like GLBP be used?
I have a typical LAN environment that spans a large warehouse. I have done a lot of redesigning of the environment to satisfy the need for a disaster recovery plan. I now have a LAN with multiple VLANs and must also connect all the access layer switches back to the core switch where the servers are.
I was thinking of something simple such as a 2 Gbps port channel across the backbone and simple floating static routes. I have since moved my WAN access link to a 3750 and implemented routing and CEF at each of the 3 core switches (blue). My question is more one of design.
We have a remote office with a 2921 router and 6 Layer 2 switches. We have a few servers which need to be in specific VLANs.
The 2921 router does not have a switching engine; we are using it to support VoIP.
So on the 2921 router I created 6 sub-interfaces, one for each VLAN, and assigned them to their specific VLANs. Then I have a trunk connection to switch 1. Switch 1 connects to all other switches in the network. Per our company design, all Layer 2 switches are in VTP transparent mode. I tested them and I can ping from one switch to all other switches.
The router's VTP mode is set to transparent, and from all switches I can ping the router sub-interfaces.
If the above design is acceptable, how do the routers know which one is active and which one is standby? If we need a direct connection between two routers they have to be on a separate subnet, and routers don't forward broadcasts - so how will HSRP work on the routers?
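From what I've read, HSRP hellos are sent to the multicast address 224.0.0.2 (HSRPv1) on the shared subnet, not broadcast, so two routers just need interfaces (or sub-interfaces) in the same VLAN/subnet - no dedicated back-to-back link is required. Something like this should work, if I understand it right (VLAN 10 and the addresses are hypothetical):
[code]
! Router 1 (intended active)
interface GigabitEthernet0/0.10
 encapsulation dot1Q 10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
! Router 2 (standby) - same VLAN, same virtual IP
interface GigabitEthernet0/0.10
 encapsulation dot1Q 10
 ip address 10.1.10.3 255.255.255.0
 standby 10 ip 10.1.10.1
[/code]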
We are designing a LAN network for ourselves. The proposed design is as follows:
- 4 x 2960S switches in a stack (Access-Stack-I)
- 4 x 2960S-PoE switches in a second stack (Access-Stack-II)
- 2 x 3750X switches in a stack (Core-Stack)
Now I would like to connect them in the following manner. First, I would like to use EtherChannel over the 10Gig links. Secondly, I would like to use cross-stack EtherChannel too. I have given a graphical illustration of the connectivity. Now my questions: a) Will the 2960S support EtherChannel using the 10G links, and the 3750X too? b) Will the proposed solution work, or will it have any problems?
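For what it's worth, this is the sort of config I was planning, with one member per stack unit so the bundle survives a unit failure (port numbers hypothetical; my understanding is that cross-stack EtherChannel needs LACP or mode on, not PAgP, if I have that right):
[code]
! On the 2960S access stack - cross-stack bundle on the 10G uplinks
interface range TenGigabitEthernet1/0/1, TenGigabitEthernet2/0/1
 channel-group 1 mode active
! On the 3750X core stack - matching cross-stack bundle
interface range TenGigabitEthernet1/1/1, TenGigabitEthernet2/1/1
 channel-group 1 mode active
[/code]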
I have a QoS design problem. I have a client that is deploying new 4507 series switches with SUP6Es. The client will be running lots of voice, streaming video, and video conferencing over the LAN and wants to base QoS on Cisco Medianet recommendations.
I need to design a new QoS policy with focus on the above media services, with basic queuing for critical data services. I have read the Medianet design guide; the suggested 12-class model will be too complex to start with, but I have seen references to starting with an 8-class model with the ability to migrate to 12 classes easily in the future. The 8-class model meets all of our requirements, but I need to understand how this will work with the 4507 queuing model. [URL]
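My understanding so far is that the SUP6E is MQC-based with eight egress queues per port (1P7Q1T), so an 8-class model maps onto it class-for-class. This is the starting-point sketch I've drafted - class names, DSCP mappings and percentages are placeholders to be tuned against the Medianet guide:
[code]
class-map match-all VOICE
 match dscp ef
class-map match-all INTERACTIVE-VIDEO
 match dscp cs4 af41
class-map match-all STREAMING-VIDEO
 match dscp af31
class-map match-all CALL-SIGNALING
 match dscp cs3
class-map match-all NETWORK-CONTROL
 match dscp cs6
class-map match-all CRITICAL-DATA
 match dscp af21
class-map match-all SCAVENGER
 match dscp cs1
policy-map EGRESS-8CLASS
 class VOICE
  priority
 class INTERACTIVE-VIDEO
  bandwidth remaining percent 25
 class STREAMING-VIDEO
  bandwidth remaining percent 15
 class CALL-SIGNALING
  bandwidth remaining percent 5
 class NETWORK-CONTROL
  bandwidth remaining percent 4
 class CRITICAL-DATA
  bandwidth remaining percent 25
  dbl
 class SCAVENGER
  bandwidth remaining percent 1
 class class-default
  bandwidth remaining percent 25
  dbl
[/code]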
I've been tasked to come up with a design to segment our internal network to reduce broadcast domain size. In addition, we are running out of available DHCP addresses. I need a solution that will give me more available IPs but reduce our broadcast domains.
We are a Cisco VoIP shop. Our current environment consists of dual 6509 chassis in a VSS config. We have 10 access switches, all model 3750. Each 3750 has dual 1Gb fiber links to the VSS core in an EtherChannel configuration. We have 2 VLANs (data and voice) that are spread across every switch. Both VLANs have their own DHCP scope.
Our current broadcast domain is a 255.255.248.0, so we have over 2000 potential broadcast devices. Cisco recommends not having more than 512 in a broadcast domain. So my research has brought me to a design as follows:
MY DESIGN:
> Have individual voice and data VLANs for each closet switch.
> We have 10 closet switches, so this would require 20 new VLANs.
> With every separate VLAN we would need a different DHCP scope.
> Configure 20 new DHCP scopes for the 20 new VLANs.
> Each DHCP scope would have 512 available addresses.
> Enable IP routing and configure EIGRP on the VSS core and 3750's.
> I'm tossing around the idea of having each 3750 be an EIGRP stub. Not sure yet.
QUESTIONS:
1. How can I verify what I described in my design?
2. Any alternative solution that might be less complicated than configuring Layer 3 on all my access switches?
3. Any thoughts on configuring EIGRP stub vs. having the VSS core do all the work?
4. Any template that I could base my 3750 config on?
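On question 3, my thinking so far: the closet 3750s have no reason to ever be transit, so advertising them as stubs should keep the VSS core from querying them during reconvergence. This is the per-closet sketch I have drafted (AS number, subnets and uplink name hypothetical):
[code]
! Closet 3750 (L3): local VLANs plus routed uplink, stub toward the core
router eigrp 100
 network 10.1.0.0 0.0.255.255
 eigrp stub connected summary
 passive-interface default
 no passive-interface Port-channel1
[/code]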
I'm looking for feedback and constructive criticism on our network redesign project for our company. We are currently on 192.168.1.x/24 and running out of addresses. We are looking to move to the following design and implement VLANs as well, for segregation and security. We are probably going to use a few SG300s for switches. [code]
On occasion employees download large files for business purposes at very fast speeds. This has the potential to overwhelm our Internet circuits, which causes our customers problems accessing our web hosting services.
Our network is comprised mostly of 2960S switches for the employees. Web servers are connected to other 2960 (non-S) switches and directly into the 6509 VSS.
Customers' traffic comes in through one pair of ASAs. Employees' traffic is handled by another pair of ASAs.
Employee traffic flows from the 2960s, past an L3 SVI on the 6509, then through the employee ASAs, then to the ASRs, then out to ISP#1 or ISP#2.
Web server traffic flows from the 2960s or 6509, to the customer ASAs, then to the ASRs, then out to ISP#1 or ISP#2. Web server traffic does not flow through an L3 SVI.
The goal is to allow employees the most bandwidth they can have, but customer traffic always has to be preferred in the event of an ISP circuit approaching its limit.
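One idea I've been considering (not sure it's the smartest way, and I'd have to confirm the PFC supports this exact form) is to police employee flows at the L3 SVI they all cross, so customer traffic always keeps headroom; the subnet, rate and burst below are placeholders:
[code]
! Requires mls qos enabled globally on the 6509
ip access-list extended EMPLOYEE-SUBNETS
 permit ip 10.20.0.0 0.0.255.255 any
class-map match-all EMPLOYEE
 match access-group name EMPLOYEE-SUBNETS
policy-map EMPLOYEE-CAP
 class EMPLOYEE
  police 500000000 1562500 conform-action transmit exceed-action drop
interface Vlan20
 service-policy input EMPLOYEE-CAP
[/code]
I realise a hard policer is a blunt instrument; per-class shaping on the ASRs would probably be smoother, so opinions welcome.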
At this past Networkers I was at the Cisco booth discussing how the 2248 can connect to the 5548 and provide server connectivity. I was told that now, as of a fairly recent NX-OS release, you can have the 2248 dual-homed to both 5548s via vPC and then have a server connected to both 2248s and be in active-active mode. Is this correct?
When we first deployed our 5548s and 2248s we had to put the 2248s in straight-pinned mode, where each had connections to only one 5548, and then the server would dual-connect to the 2248s and be in active-active mode. I was told that this changed with an NX-OS release, however documentation still seems to be fragmented on what exactly is the case.
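From the fragments I've found, this is what was described to me as "enhanced vPC": the 2248s dual-homed to both 5548s (as in a vPC fabric port-channel), and the server still running an active-active port-channel across both FEXes, with the host side looking something like this (FEX and port numbers hypothetical; my reading is that no explicit vpc statement is needed on the host port-channel, but I'd like confirmation):
[code]
! Same config on both 5548s - server dual-attached to FEX 101 and 102
interface Ethernet101/1/10, Ethernet102/1/10
  switchport access vlan 10
  channel-group 10 mode active
interface port-channel10
  switchport access vlan 10
[/code]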
I have recently been asked to design a network. What I have for equipment is four 2960G's and one 1941 router. One switch is a root switch and the other three will have end devices on them. I have decided on three VLANs: VLAN20 data, VLAN30 iSCSI, and VLAN99 management, each with separate trunk links and redundancy (see picture below).
I have separate trunks for each VLAN using switchport trunk allowed vlan, with the exception of the data VLAN. My design has the data VLAN as the native VLAN because it is going to be receiving untagged traffic from the external network. I have set up inter-VLAN routing on the 1941 via sub-interfaces to allow the VLANs to talk to each other (or do the allowed lists prevent this?). I have one port coming from my router to my switch via Ethernet cable, which is my bridge out. I have my external port doing NAT translation for my inside addresses, and a default route set up with ip route 0.0.0.0 0.0.0.0 gig0/0. I am using Rapid PVST to prevent loops and provide near-zero-downtime convergence when a link goes down. As it stands right now I cannot talk out of my network or inside of my network.
You can see it is highly redundant and I do not want to change it. This network is going to be deployed, but there will never be anybody physically there to manage it, which is why I made it as redundant as humanly possible.
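For comparison, here is roughly what I expect the 1941 side to need - the native VLAN declared with the native keyword on its sub-interface, a NAT ACL covering the inside subnets, and the trunk allowed lists on the switches permitting these VLANs toward the router. All addresses are hypothetical; if anyone can spot what I'm missing, I'd appreciate it:
[code]
interface GigabitEthernet0/1.20
 encapsulation dot1Q 20 native      ! data VLAN, carried untagged
 ip address 192.168.20.1 255.255.255.0
 ip nat inside
interface GigabitEthernet0/1.30
 encapsulation dot1Q 30
 ip address 192.168.30.1 255.255.255.0
 ip nat inside
interface GigabitEthernet0/0
 ip address 203.0.113.2 255.255.255.0   ! external side (placeholder)
 ip nat outside
access-list 1 permit 192.168.0.0 0.0.255.255
ip nat inside source list 1 interface GigabitEthernet0/0 overload
ip route 0.0.0.0 0.0.0.0 GigabitEthernet0/0
[/code]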
I would like to build the following architecture with the same C3750: networks X, Y, Z connected to the 3750 in VRF D; the 3750 uses a routed interface on subnet E for the default route in VRF D; on this routed interface sits a bypass appliance; the other interface of the bypass appliance connects to a second routed interface, also on subnet E; this second routed interface is in another VRF, C, with other networks A and B. Do you know if this will work, given the 2 routed interfaces on the same IP subnet, or is there another way to do it? My only goal is to catch SYN and ACK traffic from networks X, Y, Z.
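My tentative understanding is that this should work, because each VRF keeps its own routing table, so the same subnet E on two interfaces is not a conflict as long as they sit in different VRFs. Roughly what I have in mind (addresses hypothetical; I believe VRF-Lite on the 3750 needs the IP Services image) - corrections welcome:
[code]
ip vrf C
 rd 65000:3
ip vrf D
 rd 65000:4
interface GigabitEthernet1/0/1
 no switchport
 ip vrf forwarding D
 ip address 192.168.50.1 255.255.255.0   ! subnet E, VRF D side of bypass
interface GigabitEthernet1/0/2
 no switchport
 ip vrf forwarding C
 ip address 192.168.50.2 255.255.255.0   ! same subnet E, VRF C side
! Default in VRF D points through the bypass appliance at the VRF C side
ip route vrf D 0.0.0.0 0.0.0.0 192.168.50.2
[/code]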
We have our network set up as displayed in the attached. We have 2 HQ offices and 1 branch office. The branch office needs to connect to resources located at both HQs while taking the most efficient path. We have ethernet circuits connecting each HQ to 2 x Cisco 3560 switches in the branch. HSRP has been configured on the 3560 switches with SW1 as active and SW2 as standby. OSPF has been configured in a single area 0, and the path cost on the link between HQs has been increased so that 3560 SW1 routes to HQ1 directly and to HQ2 via 3560 SW2. The 3560s are connected with a trunk carrying an L3 SVI for OSPF. This seems to work OK, but I have noticed that the branch could become transit if the HQ1-to-HQ2 link breaks. How can this be avoided? I realise that if we configure the branch subnets and the SW1-to-SW2 link in a stub area (area 1) then all traffic will route from SW1 to HQ1 and will never share over SW2. I'm assuming that this is down to OSPF's preference rules between intra-area and inter-area routes.
We have a remote location on an MPLS circuit terminated on a Cisco router, with Internet connectivity through the central site router. We are installing a cable modem at the remote location that is to be used as the primary Internet connection, while still being able to use the Internet through MPLS if the cable Internet goes down. We want the failover/fallback to be handled automatically.
We have an ASA5505 for the cable Internet, which then feeds into the ISP's modem.
At first I was thinking about getting a module for the remote router so the cable Internet could be terminated on the remote router as well, but that introduces a single point of failure. I would also like to firewall both the MPLS and the cable Internet, but if I do so on the ASA there is another single point of failure.
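The pattern I keep coming back to for the automatic failover/fallback piece is an IP SLA probe out the cable path with a tracked default route, plus a floating static back over MPLS. A sketch with hypothetical next-hops and probe target:
[code]
ip sla 1
 icmp-echo 198.51.100.1 source-interface GigabitEthernet0/1
 frequency 10
ip sla schedule 1 life forever start-time now
track 1 ip sla 1 reachability
! Primary default via the ASA5505/cable path, withdrawn if the probe fails
ip route 0.0.0.0 0.0.0.0 192.168.100.1 track 1
! Floating static back through MPLS/central site
ip route 0.0.0.0 0.0.0.0 10.0.0.1 250
[/code]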
I'm working on designing a switch system for our core/data center.
We have 5 ESX hosts and 2 SANs with 3 nodes each. We have voice servers, a couple of routers and a few odds and ends. There are 7 other locations aggregating into this data center via 1-2 Gbps fiber connections. The bandwidth usage on these links is minimal, but there is a total of about 3000 devices aggregating into the system. My main concern right now is that the 3560G's are seeing many output drops due to the small buffer size on those switches. I have been looking at a couple of options to resolve this issue, including the 4948E, 4507E, and 3750X switches.
Budget being the biggest factor, I am finding that the 4507 might be out of the price range. So I was leaning towards the 4948E switches for connecting the servers and iSCSI SANs, as the 3750X is not recommended for iSCSI. Redundancy is important, so I would like to have two. The second concern is that I need to aggregate the fiber connections, and for that I was looking at the ME-3600X or possibly the WS-C3750X-12S-E. I'm running EIGRP, so this switch would need to do full routing, as it would also serve as the core switch for the 4948E's.
So in the end I was thinking of two 4948E switches uplinked to the ME-3600X, which would do full routing for the fiber aggregation and any routing needed for the servers and SANs.
Servers and SANs ____ 4948E ____ ME-3600X ____ 7 fiber connections
                 |____ 4948E ____|
I would look at a second ME-3600X in the future for redundancy. This is the lowest-cost, biggest-buffer solution that I could find.
I am just browsing and looking for a solution to converge my multi-vendor switched network and bring some redundancy to it, as we recently acquired redundant links. I need to change the core switch to a Cat3750G, which has Per-VLAN RSTP+ on board, but tests have shown that it won't be compatible with the proprietary per-VLAN RSTP implementation the other vendor's switches currently use.
So I thought a standards-based MSTP design might do the trick. I've run some tests and got weird and unstable switching results. I have two topology rings with a core switch in the center. Every ring has about 10 switches, so in practice the network diameter may vary from 5 switches (when spanning tree converges in the center and I have a blocking port somewhere in the middle of the ring) to about 10-11 switches (if I have a link failure on one of the ports right at the core switch). I disconnected one port from the core switch to eliminate a possible switching loop while configuring the new MSTP design. Then I started enabling MSTP on all the switches one by one, starting from the core Cat3750G, placing all switches in the same MSTP region and placing all VLANs in the default MSTI0 (CIST), because I don't need separate MSTP instances per VLAN or per group of VLANs. When I turned MSTP on on the 7th or 8th switch in the chain (I had a physical chain, since I had disconnected one port of the redundant ring), all the switches started "flapping", storming and flooding the network with broadcasts, even with that one redundant port disabled.
I have no idea what I am doing wrong. I noticed that the Cat3750G has an option that defines a possible network diameter, which automatically adjusts the hello, max-age, etc. timers according to the diameter specified. When I defined a maximum network diameter of 7, it didn't change anything: I still have a hello timer of 2 sec, etc. I've been wondering if the maximum network diameter is something more than just a "variable" to fine-tune the hello timers. Maybe I won't be able to use MSTP in my network, which might have a diameter of more than 7 switches. Or maybe it was a mistake to place all the switches in the same region and all the VLANs in the default MSTI0 (CIST), and I should configure one MSTI per VLAN or per group of VLANs and subdivide my switches into a few MSTP regions?
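One thing I'm now trying to rule out: from the docs, all three region parameters (name, revision number, and the full VLAN-to-instance mapping) must match exactly on every switch, otherwise each mismatch creates a region boundary and the switches interoperate via RSTP/PVST at those boundaries - which could explain the instability while I was converting them one by one. For reference, this is the region config I'm pushing to each switch (name and revision are placeholders):
[code]
spanning-tree mode mst
spanning-tree mst configuration
 name REGION1
 revision 1
 instance 0 vlan 1-4094
! On the core only, to pin the CIST root
spanning-tree mst 0 root primary
[/code]
Also, if I understand it right, inside a single MST region convergence is governed by the hop count (spanning-tree mst max-hops, default 20) rather than the hello/max-age timers, which might be why the diameter option appeared to change nothing.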
We are about to install a new network consisting of Cat 4500s with Sup7E at the access layer, with Nexus 7000 at the distribution and core layers. We have 14 floors with at least three 4500s on each floor. Within the office block where the access layer and distribution layer reside, we need to support secure borderless networking using 802.1x to place users from different parts of the business into segregated networks at Layer 3. All switches will have the feature sets to support MPLS / VRF / OSPF / EIGRP / BGP etc. We quickly dismissed the idea of using VRF-Lite due to the sheer number of VLANs we would need to manage and maintain; the point-to-point links alone, just to get one additional VRF on each floor, required far too many VLANs. As a result we are now considering deploying MPLS. The obvious benefits include scalability and manageability, and the fact that all switch-to-switch links can now be routed instead of having to use SVIs.
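For reference, the per-link configuration that makes this attractive to us is small: each routed link just runs LDP, and the VRFs ride MP-BGP instead of per-VLAN trunks. A rough sketch of what we're weighing (AS number, names, addresses and route-targets all hypothetical):
[code]
ip vrf GUEST
 rd 65001:10
 route-target export 65001:10
 route-target import 65001:10
mpls ldp router-id Loopback0 force
interface TenGigabitEthernet1/1
 ip address 10.255.1.1 255.255.255.252
 mpls ip
router bgp 65001
 neighbor 10.255.0.2 remote-as 65001
 neighbor 10.255.0.2 update-source Loopback0
 address-family vpnv4
  neighbor 10.255.0.2 activate
  neighbor 10.255.0.2 send-community extended
[/code]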
I have two cabinets in a datacentre (with 12 CAT5 links available between them). At the moment I just have a single firewall and a very basic 3Com 2824 unmanaged gigabit switch in each cabinet, connected together. This works perfectly as the traffic is very light and is typically server->firewall->Internet rather than heavy inter-server traffic. I want to improve the redundancy and reliability, however. So I plan to get two Fortigate firewalls, put them in an HA cluster, and have one in each cabinet with connections to the WAN. The servers are all VMware, so they have multiple NICs teamed. The last bit, then, is between the VMware host servers and the firewalls - the switches. I'd like to have each server connected to two switches to give multiple paths, so I'm looking at two switches per cabinet. From doing a fair bit of reading it looks like I'll have no problem with this; STP should be able to sort out the multiple routes to whichever firewall is the active member at the time. There will be some need for basic VLANing, as I would like to separate management traffic and certain servers; I wouldn't expect to exceed 5-10 VLANs.
As I said, the traffic is very light and from what I can tell I don't require any "fancy" features, and given that I need to buy four switches I'm trying to choose a switch that is reliable but will do the job and not much more. If we experience growth down the road then we can buy more expensive switches then. So I've been looking at the WS-C2960-48TT-S and the WS-C2960-48TT-L, the first using the LAN Lite software while the second uses LAN Base. The LAN Base version is virtually twice the price, so I'm wondering if there are any features of LAN Base that are required in my scenario? I've done quite a bit of reading and cannot really see a reason why LAN Lite should not work fine, but I don't want to discover I missed something when it is too late.
I have a Cisco 887 router which I am trying to configure. Do I really need to use the SDM utility, or can I do it through the CLI? I need to replace my current router in my small home office.
I need to extract the serial numbers of the SFPs which are plugged into an SG-200-18. For information, the SG-200 doesn't have a CLI, only a web GUI. I have only found this information: [URL]
I have a major problem with two new Small Business 300 series switches. Every time I try to save the running config I get a GUI error message: "Another copy process is active, please try again later." It's also not possible to re-flash the firmware because the GUI stops responding. I have also tried to do this via console access, which produces a "the copy utility is occupied by another user" error message - so this is not a browser-based problem. My first thought was that the switch (SG 300-28) was faulty, so I unpacked the next new one (SF 300) and got the same error messages! Then I had a 2-hour WebEx session with Cisco Small Business Support and they did not find a reason for this behavior. Both switches work normally and you can configure them, but after a reboot they are back to factory defaults again. There is no way to copy the running config to the startup config, and it's also not possible to flash the firmware (web GUI & console). Can anyone tell me if this is a fundamental problem with the 300 series?
My problem is that I have a Cisco 300 series small business switch with multiple VLANs, each one with an IP address and two or three ports assigned to it. I have an E3200 wireless router that I want to use to share internet on the switch. All of the VLANs are reachable from the other VLANs, and I've put a static route on the E3200 so that I can reach the VLANs from a machine connected only to the router. But I can't reach machines on the other side of the router or get to the internet from the switch.
I would like to make a design with 4 Nexus 5596UP: 2 of them equipped with the Layer 3 expansion module so they can serve as the core layer, and the other 2 used as Layer 2 for the server aggregation layer. The 2 Nexus in the core layer will run HSRP and will peer with the ISP via BGP for Internet connectivity. The 2 Nexus in the aggregation layer will be configured as Layer 2 devices and have FEXes and switches connected to them. What I am unsure of is how the vPC and port-channel configuration should look between the 4 Nexus. What I was thinking is to run vPC between the 2 Nexus in the aggregation layer and between the 2 Nexus in the core layer. Then I was thinking of connecting each Nexus in the aggregation layer to both Nexus in the core layer using a port-channel, and vice versa.
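From my reading, this is the standard "double-sided vPC" design: one vPC domain per pair, and then a single port-channel between the two pairs that is configured as a vPC on both sides. My draft for the aggregation pair looks like this (domain numbers and keepalive addressing hypothetical) - does this look right?
[code]
feature vpc
vpc domain 10
  peer-keepalive destination 172.16.0.2 source 172.16.0.1
interface port-channel1
  switchport mode trunk
  vpc peer-link
! Uplink: member links from each 5596 toward both cores, one logical bundle
interface port-channel20
  switchport mode trunk
  vpc 20
! The core pair mirrors this with its own vpc domain (e.g. 20) and vpc 20
[/code]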
We are currently designing a complete Layer-3-to-the-edge solution for our customers. The network design is a combination of a collapsed core (core to access) and a three-layer model (core/distribution/access) for connectivity to the Data Centre, Internet and Wireless blocks.
The core of the network contains two 6509E switches interconnected by a Layer 3 port channel (no VSS). Access layer switches (3750 stacks) connect to the core switches over p2p routed links (the collapsed-core part of the design). Distribution layer switches provide connectivity to the Data Centre, Internet and Wireless blocks (the three-layer part of the model).
All IP addressing is planned for assignment from the private RFC 1918 address block (10.0.0.0/8), for both infrastructure and access layer user VLANs.
I am carving up an internet Class C for a customer. This Class C is used by 3 distinct firewalls: QA, Corporate and Production. I want to carve up the IP space so there is a /26 for each environment. The issue I have is that the firewalls may need to communicate with each other via the public IP space. Currently I don't have any L3 switches between the firewalls and the edge internet router, so with subnetting it would seem I need to push everything through the internet router for the intra-firewall communication. I would rather not push this traffic through the edge router, so I came up with the idea of allocating all firewall outside interface IPs in the 4th (last remaining) /26. That way, the firewalls can communicate over their primary interface IPs, which will all be in the same subnet - without going through a routing "engine"/device.
For the actual environment subnets (NATs on the respective firewalls), I create a static route on the edge router pointing to each firewall's primary IP for the respective environment routes (the first 3 /26s). This is still a beta design, but I have done this before on a small scale when an ISP gave me 2 subnets, for example, assuming I was going to put a router between the customer firewall and the ISP. I would use the "routed subnet" on the ASA interface and then pull the NATs from the other subnet. The ISP would have to add a static route directing the NAT subnet to the correct IP on the "routed subnet" - which would be the firewall outside interface's primary IP. I recently found out that with ASA OS 8.4.3 and up, the ASA will not proxy-arp for IPs not in its local interface subnet. This means the ISP/router would have to configure static ARP entries on the edge router, which can get messy after the first few NAT entries. So I am debating the design now; I think this kind of approach won't be worthwhile going forward with ASA code 8.4.3 and newer.
So: how can the different ASAs communicate with each other while still carving the Class C into usable smaller subnets? The primary reason for doing this in the first place is to support routing on the edge router. I am thinking it might be time to ask for another Class C to do the routing functions, and keep the firewalls all at Layer 2 in one /24 Class C?
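For clarity, the routed part of my current draft is just three statics on the edge router, pointing each environment /26 at the matching firewall's primary IP in the fourth /26 (198.51.100.0/24 and the .193-.195 addresses below are placeholders for the real Class C):
[code]
! Firewall outside interfaces all live in 198.51.100.192/26
ip route 198.51.100.0   255.255.255.192 198.51.100.193   ! QA firewall
ip route 198.51.100.64  255.255.255.192 198.51.100.194   ! Corporate firewall
ip route 198.51.100.128 255.255.255.192 198.51.100.195   ! Production firewall
[/code]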
I'm looking for routing design scenarios to complete our configuration needs for remote branches. We will have two 1921 routers in each location, one with a T1 from our MPLS carrier, the other with a DSL connection from an ISP. The T1 router will have an assigned AS and use BGP to route back to headquarters. The DSL router will have an IPsec tunnel back to an ASA 5510 at headquarters. I envision a GRE tunnel from the DSL router back to the head-end routers connecting to MPLS at headquarters. I am not sure yet how to manipulate the routing between headquarters and the branches such that the T1 router is the primary route to and from the branches and the DSL router is the failover/backup.
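The approach I'm tentatively leaning toward: let BGP over the T1 carry the branch routes normally, run the GRE tunnel over the IPsec/DSL path, and give anything reachable through the tunnel a worse administrative distance so it only takes over when the BGP route (AD 20 for eBGP) disappears. On the DSL 1921, something like this (addresses hypothetical, and an FHRP or IGP between the two branch routers would still need to steer the LAN side):
[code]
interface Tunnel0
 ip address 172.31.1.2 255.255.255.252
 tunnel source GigabitEthernet0/1
 tunnel destination 203.0.113.10    ! head-end router, reached via the IPsec tunnel
! Floating static toward HQ, only installed if the BGP route goes away
ip route 10.0.0.0 255.0.0.0 172.31.1.1 240
[/code]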
Would GET VPN be a better choice than DMVPN for supporting VoIP, video over IP, advanced QoS and multicast? I think it should be, based on the described benefits and how it works, but I just want an expert opinion.
Can separate groups be created using the same key servers? I need to protect two functionally separate WAN segments that terminate on the same DC core routers, but I want the separate WAN segments to have different encryption policies. Is this possible?
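From what I can tell, a single key server can host several GDOI groups, each with its own identity, policy ACL, and IPsec profile - which sounds like exactly my separate-encryption-policy case. Is a config along these lines what's intended (group names, identity numbers, and ACL/profile names are placeholders)?
[code]
crypto gdoi group WAN-SEGMENT-A
 identity number 101
 server local
  rekey authentication mypubkey rsa GETVPN-REKEY
  rekey transport unicast
  sa ipsec 1
   profile GDOI-PROF-A
   match address ipv4 ACL-WAN-A
crypto gdoi group WAN-SEGMENT-B
 identity number 102
 server local
  rekey authentication mypubkey rsa GETVPN-REKEY
  rekey transport unicast
  sa ipsec 1
   profile GDOI-PROF-B
   match address ipv4 ACL-WAN-B
[/code]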
It is stated in the deployment guide for GET VPN that "Network Address Translation (NAT) is not supported by GETVPN. NAT must be performed before encryption or after decryption when GET is used." However, NAT capability is required on all of our routers.
The 2900 series routers have embedded hardware encryption, but according to the router performance guide, with a mix of traffic such as NAT, QoS and IPsec VPN they are unable to provide 100 Mbps of throughput. Would the new ISM VPN modules allow the routers to achieve 100 Mbps of throughput with the services mentioned above?