Fredrik's CCNP thread
Comments
-
joetest Member Posts: 99 ■■□□□□□□□□ I'm using your notes to brush up on things. I'm done with all the Switch Simplified chapters (QoS was a bit of a pain imo).
I'm counting on reading your notes and my own (written more than 4k lines in Danish). I'm planning on doing the labs in the book, the labs in the Switch SLM, and finally going through "How To Master CCNP Switch". 2-3 weeks if I cram it, I think. Then hopefully I'll be ready for the exam.
Boy am I glad I got the CCNP Simplified series..
I think you're gonna do great, considering what you've written here! -
Danielh22185 Member Posts: 1,195 ■■■■□□□□□□I'm just trying to keep myself motivated since I don't find the switch exam nearly as engaging as route was. I've found that I'm able to study longer hours if I'm writing something at the same time, and I really want to finish in 2 months instead of 3 like last time.
I like the idea and I agree it works well. I generally try to make my notes after reading a chapter, highlighting areas and hand-writing notes.
Currently Studying: IE Stuff...kinda...for now...
My ultimate career goal: To climb to the top of the computer network industry food chain.
"Winning means you're willing to go longer, work harder, and give more than anyone else." - Vince Lombardi -
fredrikjj Member Posts: 879
CCNP Switch Wireless
source: CCNP Switch Simplified chapter 9. Since wireless is such a small part of the exam I did no additional reading.
This chapter covers 802.11 wireless basics and how to integrate it with a wired network. There are some formatting issues here and there that I have no time to fix.
Some definitions:
- Client or Station (STA) is a device that interfaces with the wireless medium and operates as an end-user device.
- Access Point (AP) functions as a bridge between the wireless stations and the existing network backbone.
- An Independent Basic Service Set (IBSS) is an ad hoc wireless network where no distribution system is available. Essentially, this term describes clients that connect to one another without using an access point.
- A Basic Service Set (BSS) describes a single cell in the 802.11 infrastructure, defined by a single AP's coverage area. Instead of traffic flowing directly between devices like in the IBSS, it gets sent to an AP which then relays it down to the receiving host.
When a STA wants to access an existing BSS it needs to get synchronization info from the AP, using one of two methods: passive or active scanning. Passive scanning involves the STA waiting to receive a beacon frame from the AP – a periodically sent frame containing synch info. With active scanning, the STA attempts to locate the AP by sending out a probe request, and waiting for a probe response.
Once discovered, the AP might require the client to fulfill certain requirements to be able to join the network. This could be things like authentication, a matching SSID, or supporting a certain 802.11 standard.
- Extended Service Set (ESS) is basically several overlapping BSSs connected together by some distribution system. This introduces roaming, the ability for a client to move seamlessly from one AP/cell/BSS to another.
- Devices in the same IBSS/BSS/ESS must use the same Service Set ID (SSID), a text string included in every frame.
- Distribution System (DS) is the backbone connecting several BSS and thus forming an ESS. Typically this would be a normal Ethernet network, but you could also use wireless to interconnect the APs.
The 802.11 Standard
- In the OSI model, 802.11 sits next to 802.3 (Ethernet), and operates at the physical layer and the MAC sublayer, i.e. it's an L1 and L2 technology. Obviously, this means that interaction between Ethernet and 802.11 must involve some kind of encapsulation work by an intermediate device. The frame consists of a 32 byte MAC header, a variable length body (0 to 2312 bytes), and a 4 byte Frame Check Sequence (FCS).
- The MAC header consists of several fields:
- Frame Control field that is used to define the type of 802.11 MAC frame.
- Duration/ID field that indicates the remaining duration needed to receive the next frame; a collision avoidance mechanism.
- Sequence Control field contains the sequence number of each frame, as well as the number of each fragmented frame sent.
- Address fields: use 48-bit addresses, similar to MAC addresses in Ethernet. The source and destination addresses for the frame are here, along with some other addresses like receiver and transmitter.
- QoS field. Used for wireless QoS based on the 802.11e amendment.
- Collision Avoidance: CSMA/CA
- The premise behind this collision avoidance mechanism is that any station that wishes to transmit must first “sense” the medium and wait if it's busy; i.e., someone else is already transmitting. The main problem with this is that if two or more stations sense that the medium is free at roughly the same time and then send their traffic, collisions could still occur.
- Implementation of CSMA/CA for wireless uses the Distributed Coordination Function (DCF) to solve that potential problem of two devices sensing that the medium is free and transmitting at the same time. DCF defines a time interval called the DCF Interframe Space (DIFS), and if the medium is idle for this amount of time, a station is allowed to transmit. It also adds a random back off time to each station to prevent the issue of several stations sending at the same time.
- There's also a second coordination function called the Point Coordination Function (PCF) where the AP polls the clients about data transmissions and then sends them individual control frames when they are allowed to transmit. Apparently, this is an optional feature and rarely implemented.
The Hidden Node Problem
- This happens when wireless stations/clients/nodes are within the range of the AP, but not in range of each other. They are then unable to sense the carrier and several stations might send packets to the AP at the same time. To overcome this issue, a station must request access from the AP with an RTS (Request to Send) message before transmitting, and will only transmit if it gets a CTS (Clear to Send) message back.
802.11 Frame Types
There are three main types of frames sent over a wireless network:
- Control frames that are used to control access to the medium. The most common are RTS, CTS, and the acknowledgement (ACK). When a frame is received it's checked for errors and must then be acknowledged. If the sender doesn't receive the ACK, the frame must be resent.
- Management frames that are used to establish and maintain communications. The book mentions around 10 different types of frames here and I can't imagine that knowing all of these is within the scope of the exam. These frames are used for things like announcing the SSID of an AP and associating and authenticating with an AP.
- Data frames of different types that are used to transmit actual user data. These frames vary a bit depending on if they are used in an environment with PCF or DCF.
802.11 Standards
- These standards define how 802.11 is implemented at the physical layer.
- 802.11 was released in 1997 and supported speeds of up to 2 Mbps operating in the 2.4-2.5 GHz range. Because it uses 2.4 GHz, it is susceptible to interference from other devices also using this free frequency range like cordless phones and microwave ovens.
- 802.11b also uses 2.4 GHz. Data rates are 1, 2, 5.5, or 11 Mbps. There are 14 channels, each 22 MHz wide. Though 11 channels are supported in the US, there are only 3 non-overlapping channels available. It uses DSSS modulation.
- 802.11a supports up to 54 Mbps and operates at 5 GHz which makes the signal more easily absorbed by walls and other objects, but less susceptible to interference from other devices since those usually run at 2.4 GHz. Not compatible with 802.11b or g and therefore seldom used anymore.
- 802.11g supports up to 54 Mbps and is compatible with 802.11b because it uses 2.4 GHz and can operate at 802.11b bit rates. 54 Mbps in ideal conditions, but it can also operate at 48, 36, 24, 18, 12, and 6 Mbps.
- 802.11n has a max throughput of 600 Mbps, but less in the real world. It uses several antennas to increase capacity, something referred to as MIMO (Multiple Input, Multiple Output). It can combine adjacent channels for 40 MHz operation. Frame aggregation at the MAC layer decreases overhead. Backwards compatible with 802.11a/b/g.
Cisco Unified Wireless Network
- This is a framework for enterprise wireless consisting of 5 different elements:
-Access Points and Wireless Bridges
-Network Unification
-Network Management
-Mobility Services
Each of these is described in a very unappealing marketing oriented tone in the book, and probably copy/pasted from some Cisco document. I'm not going into details.
The Cisco Wireless LAN Solution
- An enterprise wireless solution consists of wireless LAN controllers and lightweight APs. A controller is basically what it sounds like: a centralized agent that controls the access points in some fashion. The point of this is of course to simplify management of larger deployments with hundreds, or maybe even thousands of access points. A controller can communicate with a LAP over an IP network which means that it doesn't have to be in the same broadcast domain.
- A protocol called LWAPP facilitates the communication between the lightweight AP (LAP) and the wireless LAN controller (WLC) by creating some kind of tunnel between the two devices. For this to happen the LAP must discover the WLC. It cannot act independently; firmware (if needed) and configuration are downloaded directly from the controller. This allows you to, among other things, “zero touch” deploy new LAPs, i.e. install them without having to mess around with configuration and things like that. There is also an equivalent open standard protocol called CAPWAP. This architecture, where certain functions are stripped from the AP and placed in the controller instead, is called “split MAC”.
- The discovery process consists of these steps:
1. The LAP requests and is issued an IP by a DHCP server.
2. The LAP sends an LWAPP discovery request message to the WLC. It can first try to do this with an L2 broadcast, or fall back to an L3 discovery mechanism. This step is repeated until at least one WLC is found and joined.
3. Any WLC that receives the LWAPP discovery request responds with the LWAPP discovery response.
4. If the LAP receives multiple responses the first one is typically used.
5. The LAP sends a LWAPP join request to the WLC. After validation, the WLC sends an LWAPP join response.
6. The LAP validates the WLC, which completes the discovery and join process. As you would expect, the join process includes various authentication and encryption mechanisms. Once these steps are completed the LAP is registered with the controller and can begin accepting client associations.
- When a wireless client associates and authenticates with an AP, the WLC places an entry for that client in its client database. This entry includes things like the client's MAC, IP address, security context, which AP it is associated with, etc. The controller then uses this information to forward frames and manage traffic to and from the wireless client. In particular, the client entry is a key component of the roaming feature.
- Three types of roaming are supported:
1. Intra-controller: The client moves between APs that are joined to the same controller; the controller simply updates the existing client entry with the new AP.
2. Inter-controller: The client entry is moved from one controller to another as the client associates with an AP assigned to a different controller than the one previously used.
3. Inter-subnet: It's possible to roam to a different IP network but it requires a little bit of extra work by the controllers and some tunneling magic. The client entry is copied to a new controller and marked as a foreign entry. In the old controller it's now an anchor entry. Since the client keeps its IP address, inbound traffic is routed to the old controller and must then be tunneled to the new one. Outbound traffic must be tunneled to the new controller. The result of this is that routing will be asymmetric and you need to make sure that both controllers have the same network access privileges. Source-based routing and firewalls can also cause issues with this kind of inter-subnet roaming.
- For roaming to be supported, you need to configure something called mobility groups. A mobility group is basically a set of WLCs that share the same mobility group name, which allows them to share client information and forward traffic to one another. This must be manually defined, and a WLC can only belong to a single mobility group. Certain requirements exist for how the controllers are configured if they are to be joined in a group. For example, they must use the same LWAPP L2/L3 transport mode and have IP connectivity to the other controllers, and you must configure each WLC with the IP and MAC address of all the other WLCs in the group.
- Lightweight APs can operate in a few different modes.
Monitor mode: the AP no longer handles data traffic and instead cycles through channels, acting as a sensor.
REAP mode: Remote Edge Access Point enables a LAP to reside across a WAN link and still be able to communicate with the WLC and provide the functionality of a regular AP.
Rogue detector mode: Allows the LAP to monitor rogue APs. These APs should be able to see all VLANs in the network since rogue APs can be connected anywhere.
Sniffer mode: An AP in this mode captures and forwards traffic to a remote machine that runs a third party software called Airopeek.
Configuring Switches for Wireless
- An autonomous AP, i.e. one that isn't attached to a controller, typically requires a trunk switch port because each SSID that you use will be connected to a single VLAN. Having multiple SSIDs then naturally requires a trunk link. This kind of design is limited by the fact that using a single SSID over a large area would require you to span that VLAN across a larger number of devices than you would be comfortable with.
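A minimal sketch of what such a trunk port might look like (VLAN numbers and interface are just made-up examples, not from the book):
switch(config)#interface FastEthernet0/10
switch(config-if)#switchport trunk encapsulation dot1q //3560-style switches need the encapsulation set before trunking
switch(config-if)#switchport mode trunk
switch(config-if)#switchport trunk allowed vlan 10,20,30 //one VLAN per SSID
switch(config-if)#switchport trunk native vlan 10 //the AP's own management traffic typically rides the native VLAN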
- Since traffic between the LAP and the WLC is tunneled, there’s no requirement that the LAP is connected to the same VLAN as its users. The mapping of an SSID to a VLAN happens at the controller instead. The result of this is that you can use the same SSID in an entire campus without having to span VLANs across a large number of devices.
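The LAP, on the other hand, just needs an ordinary access port in whatever VLAN the APs are managed from, roughly like this (numbers assumed):
switch(config)#interface FastEthernet0/11
switch(config-if)#switchport mode access
switch(config-if)#switchport access vlan 50 //AP management VLAN; the SSID-to-VLAN mapping lives on the WLC
switch(config-if)#spanning-tree portfast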
- LAPs require IP addressing information. In most networks this is provided by a dedicated DHCP server, but using the IOS is also possible. The big caveat with DHCP and lightweight APs is that you need to deliver the controller addresses somehow. This is done with DHCP option 43 which is an optional DHCP feature where you are able to encode vendor specific information. The addresses of the WLCs are placed in this field.
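A rough sketch of what that might look like on an IOS DHCP server (addresses are made up). The option 43 value is a TLV: f1 is the type, 04 is the length (4 bytes per controller), and the rest is the WLC management IP in hex, so 10.1.100.5 becomes 0a01.6405:
switch(config)#ip dhcp pool LAP-POOL
switch(dhcp-config)#network 10.1.50.0 255.255.255.0
switch(dhcp-config)#default-router 10.1.50.1
switch(dhcp-config)#option 43 hex f104.0a01.6405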
-
fredrikjj Member Posts: 879
Chapter 10: Quality of Service and Advanced Catalyst Services
- Quality of Service
- Configuring Switch Ports for VoIP.
- Power over Ethernet
Quality of Service
- IP voice calls create isochronous traffic flows with fixed data rates. Isochronous means that packets should arrive in a particular order and in fixed time intervals. This type of traffic doesn't tolerate delay, jitter or packet loss very well. Interactive video, e.g. a video conference, has similar requirements. Traditional data traffic is different in that it often has much greater tolerance for delay and packet loss; resending a lost packet during a file transfer is viable, but doing so during a voice session doesn't make sense for obvious reasons.
- A typical campus network is highly oversubscribed. A block of a few hundred gigabit access ports might only have a 1 to 2 Gbps uplink to the distribution layer, or possibly 10 Gbps if there is some special need for it. This works because most office users have very low average bandwidth requirements. Even if this oversubscribed access layer works well most of the time, there's still temporary congestion to contend with. There's also the issue of running voice over slower WAN links. Quality of Service (QoS) is a feature that allows you to put traffic into classes, and give certain classes preferential treatment.
- There are three models for QoS that we need to be aware of: Best-effort, Integrated Services and Differentiated Services. Best-effort is the default and all traffic is treated equally, i.e. there is no QoS. Integrated Services is a way to provide guarantees per flow by making bandwidth reservations through a signaling protocol called RSVP. This apparently scales very poorly and isn't covered any further in this book. Differentiated Services works by defining different classes of traffic by manipulating bits in the Ethernet and IP headers. Each individual device then discriminates based on these classes.
- You can use DiffServ style QoS with both Ethernet and IP as both of these headers have bits that can be manipulated for QoS purposes. In the Ethernet header, the 3 bit user priority field in the 802.1Q tag is manipulated. ISL also has a similar field, but ISL trunks are rarely used anymore. Since the field is 3 bits, there are 8 possible values; 0 to 7, which gives you 8 possible traffic classes. These bits are often called the “Class of Service” bits.
- With IP things get a bit more complicated. The field used for classification is an 8 bit field called Type of Service (ToS). Originally, only the first 3 bits, called the IP Precedence bits, were used, offering 8 possible values or classes. The remaining 5 bits in the field were used for other things outside of our scope. The first 6 bits of the ToS field were then redefined into the Differentiated Services Code Point (DSCP). It still has the same location in the IP header, but you now have 6 bits available for QoS purposes instead of just 3. This new model is backwards compatible with IP Precedence based on the first 3 bits.
- The first 3 bits of the 6 bit DSCP define the Class Selector (CS). The CS is the equivalent of the old IP Precedence value. The remaining 3 bits are used to define Drop Precedence to allow for prioritization of the packets within a CS. When traffic is queued, packets with higher drop precedence are more likely to get dropped. The result of this is a unique decimal value for every combination of CS and DP. However, only CS 1-5 are used to classify user traffic, and Drop Precedence is only used with CS 1-4. CS 5 is the 'critical' class, and what voice traffic uses by default. CS 6 and 7 are used for control plane traffic. CS 0 is the default no QoS setting of the bits.
- It might be easier to visualize this by looking at this table:
Class Selector | Binary of first 3 bits | Low Drop Prec. | Medium Drop Prec. | High Drop Prec.
0 | 000 | Default, no QoS
1 | 001 | AF11=001 010 | AF12=001 100 | AF13=001 110
2 | 010 | AF21=010 010 | AF22=010 100 | AF23=010 110
3 | 011 | AF31=011 010 | AF32=011 100 | AF33=011 110
4 | 100 | AF41=100 010 | AF42=100 100 | AF43=100 110
5 | 101 | EF. Uses 101 110 (46 in decimal). Highest priority user traffic.
6 | 110 | Internetwork control
7 | 111 | Network control
- Note that the first 2 bits in the drop precedence are used to define the low/medium/high priority. The last bit doesn't seem to be used. AF stands for Assured Forwarding and EF stands for Expedited Forwarding; these are just names given to these different service levels.
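A shortcut for getting the decimal DSCP values (my own note, not from the book): decimal = 8 x class + 2 x drop precedence. For example, AF31 = 8x3 + 2x1 = 26, AF41 = 8x4 + 2x1 = 34, and AF13 = 8x1 + 2x3 = 14. EF works out directly from its bits: 101 110 = 32 + 8 + 4 + 2 = 46.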
- When a switch is oversubscribed it's possible for the traffic coming from several ingress ports to exceed the bandwidth of the egress port. For example, 3 gigabit ports, fully utilized, trying to exit the switch through a single gigabit port. When that happens, the egress port's buffer begins to fill up, resulting in packet loss if the oversubscribed load is sustained for too long. It can also result in head of line blocking (HOLB); as the egress port's buffer fills up, the switch instructs the ingress port to start buffering as well. When that happens, traffic arriving at that ingress port will get buffered even if it is destined to another non-congested port on the switch. If these issues are temporary in nature, a QoS policy can improve the performance of sensitive data, like voice traffic, at the expense of other user data.
- The first step when implementing QoS on a Catalyst switch is to define your trust boundary. When QoS is enabled, all ports are considered untrusted by default. An untrusted port will override any incoming markings: by default it will set the Class of Service to 0, and the DSCP will also be set to 0 based on the CoS-to-DSCP map table. In other words, QoS will be disabled at this boundary unless the untrusted port is manually configured to override with other values. A trusted port will not modify incoming frames because it is assumed that the QoS classification is legitimate. This results in considerably less administrative overhead because classification only needs to be done at the edge.
- In a typical VoIP deployment, all switches and phones are trusted, but other end-user devices are not. This prevents someone from applying QoS to normal user traffic by manipulating the headers before the frames are sent to the switch.
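For reference, a minimal trust boundary sketch on a 3560-style switch (interface numbers assumed, not from the book):
switch(config)#mls qos //enable QoS globally; without this the switch passes markings through untouched
switch(config)#interface FastEthernet0/1
switch(config-if)#mls qos trust dscp //uplink to another trusted switch: keep the incoming DSCP
switch(config)#interface FastEthernet0/2
switch(config-if)#mls qos trust cos //access port with a phone: trust the CoS bits the phone sets
switch(config-if)#mls qos trust device cisco-phone //only extend that trust if CDP actually sees a Cisco phone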
- Once the trust boundary is defined, Cisco apparently thinks that we can't handle any more QoS at the CCNP level. There's a command called auto qos that's basically a macro for quickly implementing some kind of default QoS scheme. Of course, no one in their right mind would implement a feature with a macro without fully understanding its components, so what's the point? The textbook briefly mentions policing, marking, and congestion management and avoidance, but says that they are outside the scope of the exam.
Configuring Switch Ports for VoIP
- IP phones are typically connected to the same port as the user's computer. This is possible because the phone essentially has a 3-port switch built in, where one port goes to the switch, one to the user PC, and one to the internal VoIP device. You could carry the voice traffic on the same VLAN as the user data traffic, but that's usually not how things are done. Instead, a special voice VLAN is defined on the port in addition to the normal access VLAN.
- A basic voice vlan interface configuration might look something like this:
- switchport mode access
- switchport access vlan 100
- switchport voice vlan 200
- The switch informs the phone of the voice vlan-id through CDP. That in turn makes the phone add an 802.1Q tag to each frame matching that vlan, and sets the class of service bits to 5 for QoS purposes. While that is the usual way of doing things, the voice vlan command does have some other parameters that lets you do things differently.
- switch(config-if)#switchport voice vlan dot1p //This command instructs the phone to send frames tagged with a vlan-id of 0 and a CoS of 5. The purpose of this is to enable QoS on the voice traffic without having to use a voice vlan. Traffic will be going into the access VLAN
- switch(config-if)#switchport voice vlan none //This is the default mode. No information about voice vlans is transmitted to the phone by CDP. Voice traffic is sent untagged into the access vlan.
- switch(config-if)#switchport voice vlan untagged //I don't really understand the difference between this command and the [none] command. Neither the configuration guide nor the textbook are particularly clear. Apparently, with this command CDP instructs the phone to send untagged traffic, but if untagged traffic is already sent by default, why do I need this command?
Power over Ethernet
- Provides electrical power from the switch port to devices such as access points or IP phones. This is extremely convenient because it removes the need for a separate power cord and jack for these devices. There's a Cisco standard and an open standard for PoE: Cisco Inline Power and 802.3af-2003.
- In 802.3af a port that supplies power is called a PSE (power sourcing equipment) and a device receiving power is called a PD (powered device). There are two ways of delivering power to a PD: through a PoE-enabled switch port, or by a mid-span PSE that can be used in the event that the switch doesn't support PoE. Five different power classes are defined, with the default being 15.4W per device. Cisco's ILP standard was the basis for 802.3af and it has some extensions that use CDP.
- Cisco ILP and 802.3af use different methods to discover the PD. In the case of 802.3af, a DC current is applied and if the resulting resistance measures 25 kΩ, the connected device is a valid PD. Cisco ILP uses an alternating current signal in conjunction with a low-pass filter that allows the phone discovery signal to loop back to the switch, but prevents frames from passing between the receive and transmit pairs. Either method can be used with a Cisco phone, and once powered up, it will tell the switch how much power it needs via CDP.
- The 802.3af standard says that power may be delivered using either the active data wires or the spare wires – either scheme can be implemented. I'm assuming that this isn't relevant when using gigabit Ethernet since that uses all 4 pairs. Cisco ILP uses the active data pairs, and the default allocation is 10W but it can be adjusted by CDP.
- Power must be disconnected when the PD is removed because another non-PoE device might be plugged in immediately after. Cisco ILP and 802.3af have slightly different ways of handling this. Cisco ILP relies on the link status, and if it goes down, power is turned off. 802.3af detects the DC or AC current and if certain values fall below particular thresholds, the power is turned off.
- Cisco switches support both Cisco Inline Power and 802.3af-2003, and by now perhaps also the latest standard, 802.3at-2009, aka PoE+. Configuration of PoE involves just a few commands and the main ones are switch(config-if)#power inline [various options] and switch#show power inline [options].
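A quick sketch of those commands (interface and wattage are just examples):
switch(config)#interface FastEthernet0/5
switch(config-if)#power inline auto //default: detect the PD and negotiate the power it needs
switch(config-if)#power inline static max 15400 //alternative: pre-allocate a fixed budget, value in milliwatts
switch(config-if)#power inline never //or turn PoE off on the port entirely
switch#show power inline fastethernet 0/5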
-
fredrikjj Member Posts: 879With that QoS post, I've covered all the theory. What remains is review, and getting some more hands on experience to make sure that I score 100% on the sims. Three more weeks should be plenty of time I think to get ready.
-
ccnpninja Member Posts: 1,010 ■■■□□□□□□□ Thank you Fredrik for these notes
my blog: https://keyboardbanger.com
-
fredrikjj Member Posts: 879I've created a plan for the next few weeks.
Each day:
- Reread a textbook chapter.
- Reread my notes and assess if they are a good representation of the material. Make adjustments if necessary.
- Read the relevant chapters in the 3560 configuration guide and configure all the features (this sounds worse than it is - I'd say that only 500 of the 1,300 pages in that document are ccnp switch stuff).
- As I get closer to the exam, do some "full scale" labs.
Right now I feel like I'm pretty strong on vlan, stp, etherchannel, fhrp, but very weak on configuring certain security features and QoS. For example, if someone asked me about dynamic arp inspection, I would be able to describe the feature, but not configure it. That's obviously not acceptable.
Thank you Fredrik for these notes
I'm glad you liked them. -
joetest Member Posts: 99 ■■□□□□□□□□Hi Fredrik, I see you had some issues with voice vlan untagged|none keywords
This blog post helped me understand it better - when to use which Confusion about voice vlan | Tom G CCIE Blog
Perhaps you'll find it useful as well.
Regards -
fredrikjj Member Posts: 879
So I had my first rack rental session today and managed to cover Port Security, DHCP Snooping, Dynamic ARP Inspection and IP Source Guard. The main issue I faced was the option 82 nonsense with the IOS DHCP server. Apparently, the option 82 feature isn't supported by the IOS DHCP server, but it's enabled by default when DHCP Snooping is enabled. It must be disabled for clients to receive IP addresses. It was hard to troubleshoot because my DHCP knowledge is extremely limited, and the lab I followed in Switch Simplified didn't explain this problem at all, and confused me even more because they explicitly ENABLED option 82 in their solution (it's on by default so that doesn't make sense). I had hoped to get more done, but what can you do...
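For anyone else hitting this, the workaround boiled down to something like the following (VLAN and interface numbers are just from my lab):
switch(config)#ip dhcp snooping
switch(config)#ip dhcp snooping vlan 100
switch(config)#no ip dhcp snooping information option //stop inserting option 82 so the IOS DHCP server answers again
switch(config)#interface FastEthernet0/24
switch(config-if)#ip dhcp snooping trust //the port facing the DHCP server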
-
fredrikjj Member Posts: 879Today I spent a few more hours on real hardware, configuring VACLs, PVLAN and some minor features like Protected Ports, Storm Control, Port Blocking and 802.1X. My weekend entertainment was working my way through the entire FHRP configuration guide with GNS3. I also solved the INE CCNP Workbook for BCMSN (previous version of SWITCH), and it felt pretty easy. My conclusion after this configuration binge is that I'm pretty much ready for the exam. It's a huge relief, and I'll schedule the exam for sometime soon.
-
mistabrumley89 Member Posts: 356 ■■■□□□□□□□ Good luck!
Goals: WGU BS: IT-Sec (DONE) | CCIE Written: In Progress
LinkedIn: www.linkedin.com/in/charlesbrumley -
fredrikjj Member Posts: 879I've scheduled the SWITCH exam for Friday. I'd say that I'm better prepared than for ROUTE so I'd be surprised if I fail. I'm very confident on the actual technologies, except perhaps 802.1X that I haven't really tested beyond enabling it on a switch and pointing to an imaginary RADIUS server. Still, I worry because I could see myself failing due to some simulation issues or weird planning questions that I don't know how to prepare for.
-
Danielh22185 Member Posts: 1,195 ■■■■□□□□□□ Good luck to ya!
Currently Studying: IE Stuff...kinda...for now...
My ultimate career goal: To climb to the top of the computer network industry food chain.
"Winning means you're willing to go longer, work harder, and give more than anyone else." - Vince Lombardi -
late_collision Member Posts: 146
I've scheduled the SWITCH exam for Friday. I'd say that I'm better prepared than for ROUTE so I'd be surprised if I fail. I'm very confident on the actual technologies, except perhaps 802.1X that I haven't really tested beyond enabling it on a switch and pointing to an imaginary RADIUS server. Still, I worry because I could see myself failing due to some simulation issues or weird planning questions that I don't know how to prepare for.
Good Luck fredrikjj! I am in about the same position as you, in review mode. I am very interested to hear about your exam experience and whether you feel the simplified book covered enough detail for the exam.
I'm not sure how deep the CCNP exam will go with 802.1x, but I ended up building a RADIUS server in my test lab. In the process, I learned more about RADIUS than I did about 802.1x. For the exam, I am of the opinion that if you can remember the few commands covered in the book and default states, you're probably good to go.
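For what it's worth, the handful of commands boil down to something like this (the RADIUS address and key are placeholders):
switch(config)#aaa new-model
switch(config)#radius-server host 10.0.0.10 key RADIUSKEY
switch(config)#aaa authentication dot1x default group radius
switch(config)#dot1x system-auth-control
switch(config)#interface FastEthernet0/3
switch(config-if)#switchport mode access
switch(config-if)#dot1x port-control auto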
I know the exam curriculum places a heavy emphasis on Spanning Tree, but I feel like I'm going to get absolutely hammered with security and FHRP concepts. -
fredrikjj Member Posts: 879
I'm not sure how deep the CCNP exam will go with 802.1x, but I ended up building a RADIUS server in my test lab. In the process, I learned more about RADIUS than I did about 802.1x. For the exam, I am of the opinion that if you can remember the few commands covered in the book and default states, you're probably good to go.
Good, that's what I should have done, but I couldn't justify the time investment for what's probably a very minor topic. Anyway, the books are very specific on the 5 step process required to activate basic 802.1X, and I know those steps.
I know the exam curriculum places a heavy emphasis on Spanning Tree, but I feel like I'm going to get absolutely hammered with security and FHRP concepts.
The exam does seem to place a disproportionate emphasis on "high availability" (19% I think) in relation to how big of a topic the FHRPs and supervisor redundancy is. I've read/configured the extended FHRP configuration guide and either I'm missing something, or I'll score a bunch of easy points there.
I guess we'll see tomorrow... -
blackhawk364 Registered Users Posts: 3 ■□□□□□□□□□Thanks Fredrik for the notes and good luck with the exam.
I just passed ROUTE today (2nd attempt) and will start studying for the SWITCH by reading your notes first -
fredrikjj Member Posts: 879
I passed! I've had my victory burger and beers, and I'm randomly fist pumping whenever it becomes impossible to contain my excitement.
My review of the actual exam is less positive however. If someone told me that I would get a million dollars if I scored 950+ on ROUTE, I know I could do it because I know why I lost points on that exam. I scored 868 on SWITCH and I honestly don't really know where I lost all those points. The simulations were even more basic than the ones on the routing exam, but the language on some of the questions was very convoluted, and I even left my first ever comment which just said "this question doesn't make sense" (it literally didn't make sense). The "planning" questions were dumb, as expected, and it felt like there were more of them here than on ROUTE. I'm glad I passed, but I would have been really pissed if I had failed. From the perspective of just passing the exam, all that time spent going above and beyond the requirements by reading configuration guides etc. was useless.
blackhawk364 wrote: »Thanks Fredrik for the notes and good luck with the exam.
I just passed ROUTE today (2nd attempt) and will start studying for the SWITCH by reading your notes first
Ha! Good luck! Personally, I think that my spanning tree notes are the most worthwhile (minus some spelling mistakes and so on that I haven't fixed). The rest doesn't really use any sources other than CCNP SWITCH SIMPLIFIED. -
fredrikjj Member Posts: 879For TSHOOT I will:
1. Refresh routing stuff. Read notes, do some labs, etc.
2. Read and take some notes from the TSHOOT OCG.
3. Learn the exam topology.
3-4 weeks seems like an appropriate amount of time to me.
Congrats on the pass!
Thanks! -
joetest Member Posts: 99 ■■□□□□□□□□grats you give me hope for my Route exam.
If the switch is harder than Route then yeeh for me
now go nail that TSHOOT -
fredrikjj Member Posts: 879
I built the TSHOOT topology in IOU, which should make it quite easy to become familiar with it. I also ordered the TSHOOT OCG. While waiting for the book I'll read some routing notes and do some labs, something I'm actually almost looking forward to.
Congrats fredrikjj!
Thanks for taking time to review the exam too.
Thanks!
grats you give me hope for my Route exam.
If the switch is harder than Route then yeeh for me
now go nail that TSHOOT
Thanks!
I get the impression that it's a minority opinion that route is easier, and at the end of the day, it doesn't really matter. -
Danielh22185 Member Posts: 1,195 ■■■■□□□□□□ Congrats on the pass! You are almost there!
Currently Studying: IE Stuff...kinda...for now...
My ultimate career goal: To climb to the top of the computer network industry food chain.
"Winning means you're willing to go longer, work harder, and give more than anyone else." - Vince Lombardi -
fredrikjj Member Posts: 879The idea to refresh the entire ROUTE material in one week turned out to be quite ambitious, and I'm not really getting the week off like I deserve. I've done most of the EIGRP and OSPF stuff at this point, and while skill fade is real, I can still solve the same labs. I'm also getting great mileage out of my notes.
One thing that occupies my mind is what to do after TSHOOT. I know that I'm going to read TCP/IP by Comer, that's the only thing that's set in stone. I would like to do some kind of Wireless, but it's hard to virtualize an AP, and the equipment needed for something like CCNP Wireless seems completely out of reach for me unless I get some company to help me out. CCNA Security seems like the lowest hanging fruit, especially since I could probably install an ASA 1000v instead of having to buy a real ASA (don't suggest ASA on GNS3 - it's junk). I also already own the books for it. CCNA Sec or not, I need to get stronger on IPSec/VPN/firewalls.
I also want to do "CCIE light" where I spend six months or so reading the relevant books and solve INE volume 1 (technology specific labs), but don't try to push it to the level where I think that I would be ready to pass the lab. Looking at the contents of that workbook, I could solve almost everything on EIGRP/OSPF/BGP/L2 already, but I would need a lot of work on multicast, mpls, qos, ip services, security (and DMVPN when the updated workbook gets released). Six months might not even be enough even if I go hard.
Then there's the entire SDN thing to contend with, and the fact that some people think Cisco is in decline. It's SO HARD to determine how much of it is people trying to sound smart and cutting edge on blogs, and how much of it is actually real. Either way, I'm not at all sure how much it matters to someone at my level of skill, and if it matters to me, what do I do?
Danielh22185 wrote: »Congrats on the pass! You are almost there!
Thanks -
bharvey92 Member Posts: 420 ■■■□□□□□□□
Day 34
Re: the talk about network types yesterday. I've now figured out the purpose of point-to-multipoint. In a full mesh topology, having a DR isn't a good idea because if a router loses its direct layer 2 connection to the DR, it loses its ability to send LSAs to that DR despite still having an indirect path to it via other routers. You would still be able to ping the DR in that scenario, but OSPF won't function properly.
In a hub-and-spoke this doesn't matter because if the connection to the DR is lost, there are no other paths anyway, but if you are paying extra for a full mesh you want the network to still be functional even if one specific circuit goes down. Point-to-multipoint is the magic sauce that makes that happen.
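For reference, changing the network type is a one-liner per interface (interface name assumed), and with point-to-multipoint there is no DR/BDR election at all; every neighbor is treated like the far end of a point-to-point link. The same command goes on every router in the mesh so the types and timers match:
R1(config)#interface Serial0/0
R1(config-if)#ip ospf network point-to-multipoint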
I also did some debugging on the adjacency process, trying to break it in various ways.
How the adjacency process works:
(debug output is from 'debug ip ospf adj')
1. The router sends hello packets out the interface when OSPF is enabled. Before the router has received a hello packet from another router it's in the DOWN state.
2. When the router receives a hello packet from another router it goes into 'INIT'
OSPF: Rcv DBD from 0.0.0.2 on FastEthernet1/0 seq 0x20B opt 0x52 flag 0x7 len 32 mtu 1500 state INIT
The FLG says a hello packet makes a router go into INIT, but this output clearly says DBD which must stand for database descriptor, packet type 2. The DBD describes the LSAs in a router's database. Now, this could mean that the debug output says DBD but it was actually a hello packet that was received. Or, it was an actual DBD packet. Either way, it's mildly confusing.
3. When the router receives a hello from another router it adds that router's router-id to its own hello packets. Once a router receives a packet with its own router-id it knows that it has established communication with another router and goes into the two-way state.
OSPF: 2 Way Communication to 0.0.0.2 on FastEthernet1/0, state 2WAY
4. At this point, if the network type is broadcast, there's a DR/BDR election based on priority settings, or highest router-id if priorities are equal.
OSPF: Elect BDR 0.0.0.1
OSPF: Elect DR 0.0.0.2
5. The routers now want to synchronize their databases.
FastEthernet1/0 Nbr 0.0.0.2: Prepare dbase exchange
OSPF: Send DBD to 0.0.0.2 on FastEthernet1/0 seq 0x10B7 opt 0x52 flag 0x7 len 32
OSPF: Rcv DBD from 0.0.0.2 on FastEthernet1/0 seq 0x1103 opt 0x52 flag 0x7 len 32 mtu 1500 state EXSTART
NBR Negotiation Done. We are the SLAVE
The router on the other side has become MASTER. The FLG doesn't go into specifics as to what the significance of this master/slave relationship is.
6. The actual exchange takes place
OSPF: Rcv DBD from 0.0.0.2 on FastEthernet1/0 seq 0x25E3 opt 0x52 flag 0x1 len 72 mtu 1500 state EXCHANGE
OSPF: Exchange Done with 0.0.0.2 on FastEthernet1/0
OSPF: Send LS REQ to 0.0.0.2 length 12 LSA count 1
OSPF: Send DBD to 0.0.0.2 on FastEthernet1/0 seq 0x25E3 opt 0x52 flag 0x0 len 32
OSPF: Rcv LS UPD from 0.0.0.2 on FastEthernet1/0 length 64 LSA count 1
OSPF: Synchronized with 0.0.0.2 on FastEthernet1/0, state FULL
7. The database is now in synch with the neighbor and they are both in the 'FULL' state.
//Mismatched timers
So why even care about this type of debugging? After all, as long as you double check that the variables that must match do in fact match (area, hello/dead interval, authentication, stub flag), shouldn't things just work? Yes, but for example, what if you can't log in to the other device because it's managed by someone else? If all you've got is show ip ospf neighbor, and that shows you nothing, what do you do? You could call that other administrator up and say "IT IS NOT WORKING!", or you could go deeper.
I configured a hello interval of 9 seconds instead of 10 on the other router. debug ip ospf adj won't give you anything, and show ip ospf neighbor gives you this incredibly useful output:
R1#show ip ospf neigh
<blank>
R1#
Instead, let's turn on debug ip ospf hello.
OSPF: Rcv hello from 0.0.0.2 area 0 from FastEthernet1/0 1.1.1.2
OSPF: Mismatched hello parameters from 1.1.1.2
OSPF: Dead R 36 C 40, Hello R 9 C 10 Mask R 255.255.255.0 C 255.255.255.0
We're able to determine that the remote hello timer of 9 doesn't match ours (and the dead timer of 36 - it defaults to 4 times the hello). Now, instead of calling and saying "It's not working" you'd be able to say "it seems like the hello timers you guys told us to use don't match what you are actually running on your end". You'll sound like you know what you are talking about and the issue will probably be resolved faster. And even if you had access to the other device, this would probably be quicker than actually double checking all the parameters manually.
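For completeness, the mismatch above was created with nothing more exotic than this on the far router (interface name assumed); the dead interval follows along to 4x the hello unless you set it explicitly:
R2(config)#interface FastEthernet1/0
R2(config-if)#ip ospf hello-interval 9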
Another interesting thing I found when I tried to deliberately break the adjacency process is that a point-to-point interface will still form an adjacency with a broadcast interface. At least on the point-to-point Ethernet segment I'm using for testing. I suppose I shouldn't be surprised since it's not mentioned that those need to match. Don't ask me if this has any implications.
//Area mismatch
Next I messed around with area mismatches. OSPF won't be able to form an adjacency if the area # doesn't match. The problem is that the debug ip ospf hello command we used previously is insufficient.
R3(config-if)#
*Oct 24 04:41:55.639: OSPF: Send hello to 224.0.0.5 area 2 on Serial1/0 from 10.0.0.2
R2(config-if)#
*Oct 24 04:43:34.359: OSPF: Send hello to 224.0.0.5 area 1 on Serial2/0 from 10.0.0.1
No hellos are received according to this output, which I find suspect to be honest.
If we go back to debug ip ospf adj, we're informed of the problem immediately:
R3#debug ip ospf adj
OSPF adjacency events debugging is on
R3# OSPF: Rcv pkt from 10.0.0.1, Serial1/0, area 0.0.0.2 mismatch area 0.0.0.1 in the header
Another failure scenario in the adjacency process that I came up with is if one side of the link is non-broadcast and they forgot to tell you. How would you troubleshoot this? You probably won't, because it'll still work. Presumably because the broadcast side of the link will multicast its hellos, and then the other side will unicast the response. But, if you configure both sides of the link to non-broadcast, they won't become adjacent even if you can ping 224.0.0.5 just fine.
//Mismatched stub flag
Seen with debug ip ospf hello
OSPF: Rcv hello from 0.0.0.2 area 2 from Serial1/0 10.0.0.1
OSPF: Hello from 10.0.0.1 with mismatched Stub/Transit area option bit
debug ip ospf adj won't show anything.
Why is there such inconsistency in which debug commands show what when it comes to the adjacency process?
//authentication.
debug ip ospf adj will show you mismatched authentication
OSPF: Rcv pkt from 10.0.0.1, Serial1/0 : Mismatch Authentication type. Input packet specified type 2, we use type 0
type 0: null
type 1: simple
type 2: md5
If we have the right type, but the wrong key we get:
OSPF: Rcv pkt from 10.0.0.1, Serial1/0 : Mismatch Authentication Key - No message digest key 1 on interface
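For reference, the MD5 (type 2) configuration that would have matched looks something like this per interface (key number and string assumed); type 1 (simple) would instead use ip ospf authentication plus ip ospf authentication-key:
R2(config)#interface Serial1/0
R2(config-if)#ip ospf authentication message-digest
R2(config-if)#ip ospf message-digest-key 1 md5 MYKEY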
//LSA
I've pretty much memorized the LSAs by repeatedly drawing diagrams of their propagation. I'm a little bit confused about type 2 LSAs though. Why do we need them when we have type 1s? The FLG is fairly sparse on details, but from what little information is there, I've imagined that type 1 LSAs don't actually contain the full information about transit networks, and only point to the DR. The type 2 LSA then informs the router about the actual full details about that network, should it be required. Is this how it works?
This is a brilliant method to learn! I've never thought of this and always struggled in the CCNA exam when things go wrong!
2018 Goal: CCIE Written [ ] -
fredrikjj Member Posts: 879
Just don't take the information too seriously because a few of the things I wrote there seem kind of suspect. In particular the point-to-point network type working with the broadcast network type. I probably didn't investigate the actual database exchange properly when I wrote that. The fact that debug ip ospf adj won't show every adjacency related problem is kind of useful information however.
-
bharvey92 Member Posts: 420 ■■■□□□□□□□ Thanks. I just found that, in general, the concept of breaking things in the lab to view the error messages is very useful in practice. Something I've stupidly never done!
Good luck with the rest of your studies!
2018 Goal: CCIE Written [ ] -
blackhawk364 Registered Users Posts: 3 ■□□□□□□□□□ Afaik, multicast, mpls and qos won't be in the new lab, or am I wrong?
-
fredrikjj Member Posts: 879I spent 10 hours or so working on BGP this weekend, and I have to say that the BGP labs in the '101 CCNP Labs' book are very good. Mainly because they are close to impossible if you don't know how the protocol works, and really fun and neat if you do. I feel like you can kind of wing it with some of the CCNP-level OSPF and EIGRP labs, but BGP has a tendency to just not work if you don't know what you are doing. Though, some of the stuff they cover like confederation, route reflectors and iBGP only topologies is kind of outside the scope of the exam.
I still haven't received my TSHOOT book, but I feel like I'll probably be ready for the exam once I've done some IPv6 labs and configured the TSHOOT topology a few times. Looking at that topology, NAT is probably the one thing that I need to look at because I don't think I've used it since the CCNA. Hell, I don't even have work experience with it since we used only public IP addresses (10,000+ of them) where I worked.
Thanks. I just found in general the concept of breaking things in lab to view the error messages practically is very useful. Something I've never stupidly done!
Good luck with the rest of your stuides!
Totally, plus, interpreting debug output is actually on the blueprint for the exams so it's a good idea for that reason too.
Afaik, multicast, mpls and qos won't be in the new lab or am I wrong?
I'm assuming that this was posted in the wrong thread? -
blackhawk364 Registered Users Posts: 3 ■□□□□□□□□□
I'm assuming that this was posted in the wrong thread?
LOL, I meant the new CCIEv5 does not include multicast, mpls and qos; therefore you might want to remove these topics from your "CCIE light" studies.