Avaya to be a big runner-up to Cisco?

OfWolfAndMan Member Posts: 923 ■■■■□□□□□□
I don't know if this is a rumor or not, but as some may know, Juniper is the big one trying to get out there these days, offering free vouchers to certain companies' employees (and with a cheap voucher price as well). I've heard a few things here and there about Avaya potentially becoming another player in network infrastructure as well as in VoIP phones. If so, how close around the corner is it?
:study:Reading: Lab Books, Ansible Documentation, Python Cookbook 2018 Goals: More Ansible/Python work for Automation, IPSpace Automation Course [X], Build Jenkins Framework for Network Automation []

Comments

  • jdballinger Member Posts: 252
    See, I've been hearing the opposite from our voice folks at work: that Avaya is on the ropes and in trouble. I haven't researched it myself, so who knows...
  • Iristheangel Mod Posts: 4,133
    Avaya isn't really much of a competitor at ALL. I've been seeing them slowly dying off in the VoIP space, and I've never seen their switches in the wild. I'd say that Arista is more of an infrastructure competitor to Cisco.
    BS, MS, and CCIE #50931
    Blog: www.network-node.com
  • philz1982 Member Posts: 978
    Cisco is moving towards a services play, as service revenue is more predictable than tech refreshes and product sales. You can see this in their IoE play at C-Live.

    Avaya is a great cost competitor to Cisco. I see them getting involved in a lot of infrastructure projects we do. Avaya's presence is more regional, though, Iris, and I would expect to see more Cisco in CA. I see a fair amount of Avaya in the SE and in the NE.

    All these products are starting to get commoditized; it's the services and unified solutions that are differentiating the hardware.
  • it_consultant Member Posts: 1,903
    Avaya is one of the few vendors to have an ethernet fabric switch that is nearing Gen 2 (it uses standard shortest path bridging instead of TRILL), and people have been saying that Avaya has been "dying off" for years. Don't believe it. They have never had a huge presence in ethernet switching, but sort of like Alcatel-Lucent, you would be surprised where you might find an Avaya switch. I wouldn't spend a ton of time learning Avaya switching since, let's face it, most switches are extremely similar to each other. If you like making money and you have a thing for Avaya, learn their call center and middleware products. They are far from regional; they are a global phone player in both IP and digital systems. They own 25% of the IP phone market and some ungodly share of the digital phone market (yes, there is still a digital phone market).

    From an anecdotal standpoint, I have met very few people who were happy when they moved from Avaya to Cisco. Cisco is cheaper than Avaya, often by 35-40% when you include all the licensing and installation/migration. Put another way, I didn't know phone systems had downtime until I stopped using Avaya and Nortel products and started using Cisco. We are doing a migration from CUCM 8 to 9, and it took our reseller three tries to do the migration in test because the Cisco documentation is incomplete and people tend to wait a long time to upgrade phone systems, so there isn't a ton of experience upgrading Cisco from one version to another. In this case there was a capital letter somewhere that prevented the upgrade from moving forward. Before that, it failed because of some issue in the Linux version that 8 was running. This is unheard of in installations of Avaya or even smaller players like ShoreTel.
  • wes allen Member Posts: 540 ■■■■■□□□□□
    We are a good-sized Avaya shop, and the netops guys love the gear - highly virtualized, and I believe running shortest path bridging as well.
  • OfWolfAndMan Member Posts: 923 ■■■■□□□□□□
    I am familiar with Avaya's Promina, but that's about it. I was just curious because I have been a Cisco guy for like five years, and just had a massive curiosity about who will be out there in a few years when it comes to routing and switching. Our shop just upgraded from some equipment that was almost two decades old, and I knew it'd be better to ask someone whose inventory is a little more modernized :D. As for voice, I'm not really curious. CUCM isn't my forte :). Any other big boys getting out there?

    Iris, I'll check out Arista
  • --chris-- Member Posts: 1,518 ■■■■■□□□□□
    OfWolfAndMan wrote: »
    I am familiar with Avaya's Promina, but that's about it. I was just curious because I have been a Cisco guy for like five years, and just had a massive curiosity about who will be out there in a few years when it comes to routing and switching. Our shop just upgraded from some equipment that was almost two decades old, and I knew it'd be better to ask someone whose inventory is a little more modernized :D. As for voice, I'm not really curious. CUCM isn't my forte :). Any other big boys getting out there?

    Iris, I'll check out Arista

    I work in a 95% avaya environment. My networking buddy grumbled one day "Avaya? Cisco? They all use the same damn commands". I would assume Avaya is the cheaper of the bunch (since that's the standard for infrastructure upgrade here) and they can hire Cisco nerds to run it. They like Avaya here.
  • it_consultant Member Posts: 1,903
    OfWolfAndMan wrote: »
    I am familiar with Avaya's Promina, but that's about it. I was just curious because I have been a Cisco guy for like five years, and just had a massive curiosity about who will be out there in a few years when it comes to routing and switching. Our shop just upgraded from some equipment that was almost two decades old, and I knew it'd be better to ask someone whose inventory is a little more modernized :D. As for voice, I'm not really curious. CUCM isn't my forte :). Any other big boys getting out there?

    Iris, I'll check out Arista

    Arista uses a leaf and spine architecture while Avaya (along with Brocade and Cisco) use a full mesh fabric design. In Arista, the leaf switches can connect to multiple spine switches in a non-blocking non-spanning tree form. In a full mesh you flatten that so all switches can connect to each other in any physical topology you desire. Leaf and spine is like half of a full fabric. I am not talking bad about Arista, they have some incredibly low latency switches and they are used in places like Wall Street trading firms where milliseconds count. For most people though, being able to plug their switches into peer switches any which way they like is handier than having to make sure you are connecting leaf to spine and not leaf to leaf.
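    As a rough illustration of the connection rule described above (all switch roles and names here are invented for the sketch, not vendor configuration): in leaf-and-spine, only leaf-to-spine (or spine-to-spine, per the description later in this thread) links forward, while in a full-mesh fabric any inter-switch link becomes usable.

```python
# Hypothetical sketch of the connection rules discussed above. In leaf-and-spine,
# a leaf-to-leaf link would be blocked; in an SPB/TRILL-style fabric, any
# switch-to-switch link simply becomes an ISL. Roles/names are made up.

def link_allowed_leaf_spine(role_a, role_b):
    """A link forwards only if it joins a leaf to a spine, or spine to spine."""
    roles = {role_a, role_b}
    return roles == {"leaf", "spine"} or roles == {"spine"}

def link_allowed_fabric(role_a, role_b):
    """In a full-mesh fabric, any inter-switch link is a usable ISL."""
    return True

print(link_allowed_leaf_spine("leaf", "spine"))  # True
print(link_allowed_leaf_spine("leaf", "leaf"))   # False: that port would block
print(link_allowed_fabric("leaf", "leaf"))       # True: no such rule in a fabric
```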

    A true competitor to Cisco in the VoIP realm would probably be ShoreTel. I see more ShoreTel for greenfield VoIP installations than any other brand. There is some market penetration from NEC, but not a lot. If you need to support digital, analog, and VoIP on the same system, your only options are really NEC and Avaya.

    p.s. Next time you are in an Arista switch type the command "Show Chickens" and "Show Farm".
  • Dieg0M Member Posts: 861
    it_consultant wrote: »
    Arista uses a leaf and spine architecture while Avaya (along with Brocade and Cisco) use a full mesh fabric design. In Arista, the leaf switches can connect to multiple spine switches in a non-blocking non-spanning tree form. In a full mesh you flatten that so all switches can connect to each other in any physical topology you desire. Leaf and spine is like half of a full fabric. I am not talking bad about Arista, they have some incredibly low latency switches and they are used in places like Wall Street trading firms where milliseconds count. For most people though, being able to plug their switches into peer switches any which way they like is handier than having to make sure you are connecting leaf to spine and not leaf to leaf.
    Could you explain this? We run several Arista switches on Wall Street and we use STP with a full mesh design. Are you talking about MLAGs? As far as I know, Arista and Cisco can run a leaf and spine as well as a full mesh design, so I need some clarification on this.
    Follow my CCDE journey at www.routingnull0.com
  • it_consultant Member Posts: 1,903
    Unless the switches are running Shortest Path Bridging (SPB) or Transparent Interconnection of Lots of Links (TRILL), you are not running an ethernet fabric. MLAGs and spine and leaf are similar but not the same. In SPB or TRILL, spanning tree is completely irrelevant because if you plug a switch into a switch it will form an ISL (inter-switch link, ripped off from fibre channel networks) automatically, and TRILL/SPB will automatically figure out the shortest path from point A to point B. As far as I know (this may be old, and Arista may now support TRILL or SPB), a leaf switch can plug into multiple spine switches to provide redundant pathing, but you cannot plug a leaf into a leaf without that port being blocked. This is why a fabric is typically referred to as "non-blocking": none of your ISLs will ever enter a spanning tree blocking state.
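    The shortest-path behavior described above can be sketched abstractly: in SPB/TRILL every switch knows the full ISL graph, so forwarding reduces to a shortest-path computation rather than blocking ports. A toy Dijkstra over a made-up, deliberately loopy triangle topology (purely illustrative, not vendor code):

```python
# Toy sketch: SPB/TRILL-style forwarding as shortest path over the ISL graph.
# A triangle topology would force STP to block a link; a fabric just routes it.
import heapq

def shortest_path(links, src, dst):
    """Dijkstra over an undirected ISL graph given as {(a, b): cost}."""
    adj = {}
    for (a, b), cost in links.items():
        adj.setdefault(a, []).append((b, cost))
        adj.setdefault(b, []).append((a, cost))
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (dist + cost, nxt, path + [nxt]))
    return None

# Invented topology: A-B, B-C, C-A, all equal cost.
isls = {("A", "B"): 1, ("B", "C"): 1, ("C", "A"): 1}
print(shortest_path(isls, "A", "B"))  # ['A', 'B'] -- the direct ISL wins
```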

    STP can still be run if, for example, you plug your fabric into a traditional switch. In my case (I run a Brocade fabric) the ports plugged into traditional switches run edge loop detection which will detect a broadcast storm coming from the traditional switch and shut down the port. However, my ports which plug VDX A to VDX B and VDX C and from C to A and C to B etc do not run STP.

    In leaf and spine we still have 2 levels of switches, in a full fabric it is only one level. The result, though, is similar. A high speed, low latency, full or partially non-blocking network.
  • OfWolfAndMan Member Posts: 923 ■■■■□□□□□□
    I am not too familiar with SPB, TRILL, or leaf and spine, unless they're vendor-specific terms and/or synonymous with more familiar terminology?
  • Dieg0M Member Posts: 861
    @it_consultant, I still don't understand; the Aristas are connected to each other with 10G and 40G ports running STP. No TRILL or SPB involved.
  • it_consultant Member Posts: 1,903
    OfWolfAndMan wrote: »
    I am not too familiar with SPB, TRILL, or leaf and spine, unless they're vendor-specific terms and/or synonymous with more familiar terminology?

    Ethernet fabrics are part of "datacenter bridging" which aims to converge storage and ethernet networks primarily by beefing up ethernet to become truly "Lossless". There are two organizations which define standards for this, IETF and IEEE. IETF supports using TRILL and IEEE supports SPB for enhancing ethernet under DCB.

    Data center bridging - Wikipedia, the free encyclopedia

    So far Cisco (though I don't know anyone who actually uses FabricPath), Avaya, Juniper, Alcatel-Lucent (again, I am not sure anyone is actually using this), Brocade, and soon possibly HP offer DCB and DCBX switches which use either TRILL or SPB. Coincidentally, Cisco and Brocade both use TRILL, so you can form an ethernet fabric with Cisco Nexus and Brocade VCS datacenter switches.

    What Cisco and Brocade did was take an existing storage switch and hack it together to do ethernet switching with many of the great features of storage networking. Storage guys think it is crazy that we can't just connect switches together at will. They are shocked to hear we would actually allow ports to go into a "blocking" mode. When they learn that, when we prioritize ethernet traffic, one of the options is to drop low-priority traffic, they are stunned. In the storage world a dropped packet can ruin a database. They are trying to get ethernet to that point of reliability and congestion control so that phones, storage, and regular ethernet can function over the same network. Hence the term "data center bridging".

    Leaf and spine is vendor specific:

    http://www.enterprisetech.com/2013/11/04/arista-flattens-networks-large-enterprises-splines/

    Ethernet fabrics are somewhat standardized:

    http://www.techopedia.com/definition/28403/ethernet-fabric
  • it_consultant Member Posts: 1,903
    Dieg0M wrote: »
    @it_consultant, I still don't understand; the Aristas are connected to each other with 10G and 40G ports running STP. No TRILL or SPB involved.

    Then it isn't an ethernet fabric. Where this possibly becomes a problem is if you start actually filling your 10G and 40G ports with traffic. SPB and TRILL can direct traffic to ISLs with lower amounts of traffic without admin intervention. Sometimes this means taking a longer but less congested route through the fabric. It is difficult to explain because it is much more similar to an FC storage network. Using an LACP bond (which is what Arista uses in their MLAG) is great, but it essentially just bonds two links and fills them evenly until they are full. In a fabric it is done much more intelligently, since every switch in the network is aware of every other switch AND their link statuses, which include how congested those ISL links are.
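    The contrast drawn above can be sketched like this (ISL names and utilization numbers are invented): a LACP-style bond hashes a flow onto a member link without looking at load, while a fabric with full topology knowledge can steer traffic toward the least-congested ISL.

```python
# Toy contrast between LACP hashing and congestion-aware fabric path choice.
# Names and utilization figures are made up for illustration.

def lacp_pick(links, flow_id):
    """LACP-style: hash the flow onto a member link, ignoring congestion."""
    members = sorted(links)
    return members[hash(flow_id) % len(members)]

def fabric_pick(links):
    """Fabric-style: choose the ISL with the lowest current utilization."""
    return min(links, key=links.get)

utilization = {"isl1": 0.92, "isl2": 0.15}  # fraction of capacity in use
print(fabric_pick(utilization))  # isl2 -- the less congested path
```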
  • Dieg0M Member Posts: 861
    OK. I still don't understand what you mean by "For most people though, being able to plug their switches into peer switches any which way they like is handier than having to make sure you are connecting leaf to spine and not leaf to leaf." What are these limitations you are talking about on the Aristas?
  • it_consultant Member Posts: 1,903
    The Arista doesn't create automatic ISLs, and there are rules on how you can connect your leaf and spine switches. In a fabric there are no rules and the ISLs configure automatically. That isn't to say that Arista is overly complex, but it is hard to get easier than plugging it in and walking away. Want to plug it into another switch for redundancy? Plug it into another switch and walk away. Getting some traffic congestion? Plug in another link and walk away. In leaf and spine, your leaf switches are non-blocking to the spine but not to each other. In this architecture almost half of your potential bandwidth from port to port across the switching fabric would be blocked. In a fabric all switches are peers, so traffic can take any path through the fabric.

    Let's give a scenario. Port 1 on leaf A needs to talk to port 2 on leaf B. It can go to directly connected spine A or directly connected spine B, which is also directly connected to leaf B. Either way, you are going through a spine. In a TRILL fabric it is likely that I have plugged switch A into switch B, so when port 1 on switch A needs to talk to port 1 on switch B it simply takes the shortest route through the fabric (in this case the directly connected link) as defined by TRILL or SPB. Keep in mind that in this fabric I have also plugged switch A into switch C, and switch B is also plugged into switch C, so in case the link between A and B is full or broken, TRILL will send the traffic through switch C.
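    The scenario above can be sketched as a tiny graph: with the direct A-B ISL up, traffic takes it; if that link fails, the path via C is used, with no spanning-tree blocking involved. A minimal breadth-first-search sketch under those assumptions, not vendor code:

```python
# Toy failover sketch for the A/B/C fabric scenario described above.

from collections import deque

def first_path(links, src, dst):
    """Breadth-first search returning the first shortest path found."""
    frontier = deque([[src]])
    seen = {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for a, b in links:
            # Follow the link if either endpoint matches the path's tail.
            for nxt in ((b,) if a == path[-1] else (a,) if b == path[-1] else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
    return None

full = [("A", "B"), ("A", "C"), ("B", "C")]
print(first_path(full, "A", "B"))                 # ['A', 'B']
degraded = [l for l in full if l != ("A", "B")]   # the direct A-B ISL fails
print(first_path(degraded, "A", "B"))             # ['A', 'C', 'B'] -- reroute via C
```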

    Look at Arista's document

    Link Aggregation (MLAG)

    They have a server with an LACP bond going from the server into MLAG domain 1. This works properly. Now in a full fabric there is no MLAG domain; I can form an MLAG (in Brocade it is called a vLAG) between any two or more switches in the fabric. This is because there is a unified config layer across the entire fabric.

    http://www.brocade.com/downloads/documents/html_product_manuals/NOS_AG_300/wwhelp/wwhimpl/common/html/wwhelp.htm#context=53_1002561_02&file=CH_Configuring_LACP.28.3.html
  • Dieg0M Member Posts: 861
    What model of Arista are you talking about? We run STP with other switches on the network just fine and it is plug and play like a Cisco or Avaya switch.
  • it_consultant Member Posts: 1,903
    I don't think you are understanding what an ethernet fabric is compared to spine and leaf or traditional switching. Arista is not the same as a Cisco or Avaya running TRILL/SPB, as I explained before. If you are running STP on a port, then there is a possibility that it will go into a blocking state to prevent a broadcast storm. In spine and leaf a broadcast storm is unlikely but possible. In a fabric a broadcast storm is impossible, because ISL ports run a different set of protocols than a traditional or leaf and spine switch. If I get dumb and plug two ports of a traditional switch into two parts of a fabric without setting up an LACP bond, then either the traditional switch will block one of the ports or the fabric will do it, but it isn't an STP block. There are no TCNs or anything; it runs a different protocol called edge loop detection. Since it is impossible for any ISL port to go into a blocking state, it is said that an ethernet fabric is "lossless". Therefore it is a more appropriate ethernet setup for things like FCoE, where there is zero tolerance for packet loss.

    I am not slamming Arista; I think they make a great product without the Cisco markup and with more innovation than you get from HP. I am just pointing out the differences between the ethernet fabric that Avaya is now selling and the leaf and spine that Arista sells. Since Arista switches run generic silicon with a Linux kernel, they are appropriate for SDN deployments and, more importantly, older Arista switches could also become SDN capable through software upgrades, which can't be said for switches that run proprietary ASICs.

    I love talking about this stuff and having now thoroughly researched Avaya, I might actually start recommending them as a competitive product to Brocade (depending on price) because it looks like they have a quality product.
  • Dieg0M Member Posts: 861
    Thank you, I understand now. It seems that because you are from a Brocade environment, you are arguing that the ethernet fabric is better than a spine and leaf fabric. The reality is that on Wall Street we use the Arista switches for traditional switching only and do not use the spine and leaf design.
  • it_consultant Member Posts: 1,903
    Actually what I was saying originally was not to dismiss Avaya because Avaya has an advanced ethernet technology. I used Brocade as an example because it is easy for me to find reference material from them since I use them. My intention is not to promote Brocade or bash Arista but to defend Avaya from the perception that they are a "dying company".
  • Dieg0M Member Posts: 861
    Avaya, Juniper, Arista, Brocade... there is one more that we have not talked about that could potentially be number 2 in the next few years: Huawei. I have never seen a Huawei device deployed in production, but I hear that they are very big in Asia/Europe and growing very fast.
  • it_consultant Member Posts: 1,903
    HP, Enterasys, Ciena...
  • phoeneous Member Posts: 2,333 ■■■■■■■□□□
    So is the only difference between spine+leaf and the Cisco hierarchical design that, with Cisco, your same-layer switches are connected to each other, but with spine+leaf, the spines are independent of each other?
  • it_consultant Member Posts: 1,903
    No, the spines are peers, so you can plug those up with as many loops in the topology as you like between the spines (as is my understanding; I might be wrong, so someone else can chime in) and they will form a high-speed non-blocking bus. The leafs can plug into as many spines as they like and those links will be non-blocking, but if you connect leafs together, that connection will block. According to Arista's documentation, the only reason to run STP in this topology is a grievous misconfiguration, or if you plug an edge port (a port not plugged leaf to spine or spine to spine) into a traditional switch which, by the very nature of being a traditional switch, is susceptible to broadcast storms.

    It might seem unnecessary to set this up, but the latency on Arista's leaf and spine topology is shockingly low. Way faster than a regular switch architecture; measured-in-nanoseconds quick. I am not sure how the latency compares to a TRILL fabric, which is also incredibly low. TRILL and SPB are more for both speed and redundancy, to mimic a storage fabric for lossless iSCSI and FCoE, whereas a leaf and spine is really designed for ultra-fast switching.