How to get here?

MitM Member Posts: 622
Every now and then, I check the job boards to see what's out there. I have a good job, so I'm not actively applying for anything, but I am trying to transition more into security. Part of my job includes network security (firewalls, Nessus scans). I came across this posting and thought it sounded interesting, and I started to wonder: what's the best way to get the knowledge for this type of position?
I'm studying for the CISSP now (early stages), and even though it's listed in the job req, it's not going to help me get the technical skills necessary.

Is this something that I would want to invest in SANS training for? Just curious.

JOB SUMMARY:
The Senior Cybersecurity Engineer assists in deploying, maintaining, tuning, monitoring, and managing security tools related to the Realogy SOC. The Senior Cybersecurity Engineer will function as a Level 2/Level 3 analyst and act as a mentor to other members of the team. The Senior Cybersecurity Engineer will review alerts from Level 1 analysts and Realogy network security devices, security information and event management (SIEM), and other tools as needed. The Senior Cybersecurity Engineer works with other analysts to collect, correlate, and analyze security-relevant data, and to respond to threats in a timely manner. This position reports to the SOC Director.

ESSENTIAL DUTIES AND RESPONSIBILITIES:
  • Monitor security events escalated by Level 1 security analysts and respond appropriately to protect Realogy information and assets
  • Utilize endpoint products such as Carbon Black to identify malicious activity on the network
  • Review daily and weekly reports generated by Level 1 security analysts for actionable tasks for the SOC team
  • Collaborate with the Cyber Threat Intel Analyst (CTIA) to ensure appropriate security incident management and threat response processes are followed
  • Provide technical expertise of security tool deployment and implementation supporting the SOC
  • Analyze SOC functions and recommend upgrades/changes to ensure the security of the Company
  • Continuously assess current state of security monitoring and recommend changes for improvement
  • Proactively conduct research of Realogy network traffic and system activity looking for security anomalies and suspicious activities
  • Perform Advanced Persistent Threat correlation between multiple security event sources such as firewall logs, threat intelligence feeds, AV, IDS, IPS, and Carbon Black Enterprise Response
  • Responsible for tuning and implementing configuration changes related to IDS/IPS, endpoint security, SIEM, and other tools as necessary
  • Responsible for in-depth reviews of log files and using this information to identify security events
  • Provide mentoring to other members of the Security Operations Center team

MINIMUM QUALIFICATIONS:
  • Bachelor’s degree in technical engineering or IT related field and 5+ years of experience in a large scale, complex, high performance network.
  • 4+ years of experience working with a Security Information and Event Management (SIEM) system to correlate events across several devices
  • Strong understanding of network devices such as Intrusion Detection Systems (IDS)/Intrusion Prevention Systems (IPS), firewalls, network packet capture tools, and file integrity monitoring tools
  • Expert level knowledge in incident prevention, detection and response tools such as Carbon Black
  • Extensive knowledge of network and server security products, technologies, and protocols
  • Requires a background in at least 2 of the following domains: hacking and incident response; network forensics; security engineering, networking protocols, and data centers; security analysis and investigations
  • Security certifications (CISSP, CISM, GIAC certs) preferred
  • Strong problem-solving skills, critical thinking, excellent analytical ability, strong judgment and the ability to deliver high performance and high levels of customer satisfaction in a matrix managed environment.

Comments

  • TheFORCE Member Posts: 2,297
    I think you are already there based on your knowledge; you just haven't done that type of work, meaning working with those particular tools. When I go to interviews and they ask for experience with tool Y when I only have experience with tool X, I tell them: all these tools provide the same results. At the end of the day you are looking for information, and understanding how the information can be retrieved is independent of the tool; the tools just have different ways of getting it. That can be taught and learned, but you cannot teach someone the flow of logic that comes with experience. Considering your experience with networking and firewalls, I'd say you are already there. What these other tools do is just aggregate the logs from multiple sources.
  • MitM Member Posts: 622
    Thanks for the confidence booster, bud, you make a good point. I'm just looking to sharpen my skills even more. I'm really familiar with Palo Alto firewalls, since I'm on them daily. There's a lot that I don't get to see at my company. We looked into deploying a SIEM, but management pulled the plug, since they didn't feel we had the staff for it. There was talk about deploying a dedicated IDS/IPS on the network, but so far nothing. My guess is not this year either.

    I'll keep working on the CISSP for now. Maybe I'll look into a course from SANS later on, too; they're just so expensive. There really doesn't seem to be a poor man's alternative :D
  • YFZblu Member Posts: 1,462
    If you're interested in network security monitoring, you should consider reading Bejtlich on the subject. A great complement to that would be Skoudis for the incident handling piece. These are older texts, but they are classics in the field and lay a groundwork on which you can begin to think for yourself and help drive security-related decisions in terms of monitoring and response strategy.

    In terms of SANS training, keep an eye on the Work Study program - it would allow you to attend SANS training at a discount. SEC503 (Intrusion Detection In-Depth) and FOR572 (Advanced Network Forensics and Analysis) would put you on the right track as far as the network-based detection aspects are concerned, along with important methodologies related to identifying compromise and log management. Some may disagree with me, but if your goal is to get into more of a network defense role, my advice is to put the CISSP studies on hold and begin to dive into these areas of network defense.
  • gespenstern Member Posts: 1,243
    That's nothing special, actually. Know Windows internals at a good level; not sure if you can get a trial version of Carbon Black to play with it, but again, it's nothing special. There are many good free videos on incident response on YouTube, Vimeo, and other sites.

    Would advise against Bejtlich, material is old and Bejtlich himself is old and managerial. Not sure if he ever was good.
  • YFZblu Member Posts: 1,462
    Would advise against Bejtlich, material is old

    If you haven't noticed, NSM is old. It's also the most well-established facet of Infosec. Its foundation is TCP/IP, which is old...yet here we are. The Tao is exactly that - a set of principles to work from. IT organizations all over the world are propping up half-baked implementations of commercial products without vision or a proper strategy. Knowing how to actually think about network security and monitoring is what matters.
    and Bejtlich himself is old and managerial. Not sure if he ever was good.

    I mean, I personally don't care that you think this way. It should be noted though, this is just really bad advice to give someone looking to enter the field.
  • gespenstern Member Posts: 1,243
    Yeah, let's advise him to learn in detail how charge carriers are used to carry electric charge. Cause that's the foundation to work from. I recommend Charles-Augustin de Coulomb's book on this topic, that's where everyone should start.

    On a serious note, network monitoring is outdated, especially after the 2013 Snowden revelations, because more and more protocols get encrypted and work over this or that encryption protocol, so the tools don't see much. Even malware I see these days relies on encrypted protocols, unless it is extremely dumb, making network monitoring useless.

    The same goes for network DLP as well, as almost everything gets uploaded over encrypted channels so if you didn't catch it on a host -- good luck "monitoring" it on the wire.
  • YFZblu Member Posts: 1,462
    The same goes for network DLP as well, as almost everything gets uploaded over encrypted channels so if you didn't catch it on a host -- good luck "monitoring" it on the wire.

    You seem to be assuming that I don't believe in a balanced approach to monitoring, or recognize that NSM has its own set of limitations like every other monitoring technique does. The fact of the matter is I'm more interested in host-based monitoring than I am in pure NSM. For example I wrote and maintain two forensic tools currently bundled with the SANS SIFT kit (1, 2); both of which rely on host-based forensic artifacts - along with several other non-public tools I use when collecting and processing forensic artifacts at scale. I raised the topic of NSM for two reasons:

    1. The job posting referenced by the OP specifically references network monitoring and IDS/IPS
    2. The job posting referenced by the OP talks about security strategy, which Bejtlich covers as well as any author I'm familiar with
    more and more protocols get encrypted and work over this or that encryption protocol, so the tools don't see much.

    NSM tools 'don't see much' - except who talks to who, when, for how long, volume of data in/out, protocol metadata, etc. You know, important details one would like to have when identifying how data flows through a network - and when scoping the relationships between targets and threats.
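
    To make that concrete, here's a minimal sketch of pulling exactly that kind of flow metadata out of a Zeek/Bro conn.log. Assumptions stated up front: the stock tab-separated log format with the default conn.log field names, and a placeholder file path - adjust for your own sensor.

    ```python
    from collections import Counter

    LOG_PATH = "conn.log"  # placeholder; point at your sensor's conn.log

    def to_int(value):
        """Zeek/Bro writes '-' for unset numeric fields."""
        return int(value) if value and value != "-" else 0

    def records(path):
        """Yield log rows as dicts, using the '#fields' header for names."""
        fields = []
        with open(path) as fh:
            for raw in fh:
                line = raw.rstrip("\n")
                if line.startswith("#fields"):
                    fields = line.split("\t")[1:]
                elif line and not line.startswith("#"):
                    yield dict(zip(fields, line.split("\t")))

    # Who talks to who, and how much: total bytes per src/dst pair.
    talkers = Counter()
    for rec in records(LOG_PATH):
        pair = (rec["id.orig_h"], rec["id.resp_h"])
        talkers[pair] += to_int(rec.get("orig_bytes")) + to_int(rec.get("resp_bytes"))

    print("Top talkers (src -> dst, total bytes):")
    for (src, dst), total in talkers.most_common(10):
        print(f"  {src} -> {dst}  {total}")
    ```
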
    Even malware I see these days relies on encrypted protocols, unless it is extremely dumb

    This way of thinking is poor - assuming that a given malware sample is "extremely dumb" because the author implemented unencrypted communication channels has me questioning how many active incidents you've managed in recent years beyond commodity malware. Good attackers don't do elite things for the sake of being elite - good attackers, among other things, are capable of elevating to meet a challenge and will do what it takes to accomplish an objective. Tying a lack of encryption to sophistication only puts you at a disadvantage.

    For example, in 2016 two of Mandiant's incident response analysts (Matthew Dunwoody and Nick Carr) made the rounds to various security conferences giving a talk called 'No Easy Breach' in which they detailed a months-long engagement with a highly skilled nation-state attacker. In terms of the malware being used in that case, C2 wasn't encrypted until the third iteration of the backdoor - and even then, NSM played an important role: Bro's SSL log helped identify that an interesting cipher was in use, recorded the email address associated with the key, helped identify the use of a self-signed cert, etc. Additionally, after the attackers began employing anti-forensic techniques (timestomping), "network time" became an important factor in correlating behavior across the environment when compromised hosts couldn't be relied on for accurate temporal data.
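
    That Bro SSL log angle is easy to operationalize, by the way. Here's a rough sketch of hunting ssl.log for self-signed certs and rare ciphers - assuming the stock tab-separated ssl.log fields (cipher, validation_status, subject), which vary a bit across Bro/Zeek versions, and a placeholder path; the "rare" threshold is arbitrary:

    ```python
    from collections import Counter

    LOG_PATH = "ssl.log"  # placeholder; same TSV parsing idea as the conn.log sketch

    def records(path):
        """Yield log rows as dicts, using the '#fields' header for names."""
        fields = []
        with open(path) as fh:
            for raw in fh:
                line = raw.rstrip("\n")
                if line.startswith("#fields"):
                    fields = line.split("\t")[1:]
                elif line and not line.startswith("#"):
                    yield dict(zip(fields, line.split("\t")))

    cipher_counts = Counter()
    self_signed = []
    for rec in records(LOG_PATH):
        cipher_counts[rec.get("cipher", "-")] += 1
        # Bro/Zeek records the cert chain verdict in validation_status
        if "self signed" in rec.get("validation_status", "").lower():
            self_signed.append((rec["id.resp_h"], rec.get("subject", "-")))

    print("Rare ciphers (seen fewer than 5 times):")
    for cipher, count in cipher_counts.items():
        if count < 5:
            print(f"  {cipher}: {count}")

    print("Servers presenting self-signed certs:")
    for host, subject in self_signed:
        print(f"  {host}  {subject}")
    ```
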

    The 'No Easy Breach' case is obviously just one arbitrary example, but I think it does a good job of highlighting the importance of a well-balanced monitoring apparatus and the importance of checking one's bias at the door with regard to perceived attacker sophistication.
  • gespenstern Member Posts: 1,243
    YFZblu wrote: »
    You seem to be assuming that I don't believe in a balanced approach to monitoring, or recognize that NSM has its own set of limitations like every other monitoring technique does. The fact of the matter is I'm more interested in host-based monitoring than I am in pure NSM.
    That's nice; then let's advise the TS to concentrate on host/network facets with a roughly 85/15 ratio instead of recommending outdated books, especially considering that the importance of host-based monitoring grows over time while the network monitoring role becomes less important.

    I'm not saying that network monitoring is totally unimportant; yes, as you mentioned, it is useful for at least getting some metadata on network activity, but these days it plays more of a supplementary role, as opposed to the times when Bejtlich was still doing something technical.
    YFZblu wrote: »
    I raised the topic of NSM for two reasons:

    1. The job posting referenced by the OP specifically references network monitoring and IDS/IPS
    2. The job posting referenced by the OP talks about security strategy, which Bejtlich covers as well as any author I'm familiar with
    That's nice, but the JD seems to ask about host-based IR experience first and mentions specific products, which makes sense, considering the roles host- and network-based forensics play these days. I'm not against reading Bejtlich if someone wants to, like you, but one should keep in mind the fact I described above -- it's far from being as useful as it was when Bejtlich was writing technical stuff.
    YFZblu wrote: »
    NSM tools 'don't see much' - except who talks to who, when, for how long, volume of data in/out, protocol metadata, etc. You know, important details one would like to have when identifying how data flows through a network - and when scoping the relationships between targets and threats.
    I.e., just metadata, as opposed to metadata (which is less important than the data in the majority of cases, and easier to get) plus data, which was available for the vast majority of traffic when Bejtlich was still doing technical stuff. Nowadays the role of network monitoring has shrunk considerably due to ubiquitous encryption.
    YFZblu wrote: »
    This way of thinking is poor - assuming that a given malware sample is "extremely dumb" because the author implemented unencrypted communication channels has me questioning how many active incidents you've managed in recent years beyond commodity malware. Good attackers don't do elite things for the sake of being elite - good attackers, among other things, are capable of elevating to meet a challenge and will do what it takes to accomplish an objective.
    Cool, nice ad hominem pitch, I can understand where it's coming from, LOL.

    Okay, you are right, the sweeping statement on dumb malware was not entirely correct, so you had your chance. A correct statement would be that the vast majority of advanced malware relies on encryption to avoid detection/prevention at the NIDS/NIPS level, while dumb malware, targeted primarily at home users, shows less encryption in network traffic.

    An example, since you were asking, would be the Dridex banking trojan, for which I managed IR early last year. I developed scripts for detection and removal usable at large scale, as anti-malware solutions were useless at the time: only a few engines on VT managed to detect it, and they did so using their heuristics engines (less robust) as opposed to signatures.

    AFAIR, Dridex relied on encrypted traffic over TCP 8443, which was open as it is a rather common alternative TCP port for TLS. It didn't use DGA or predefined hostnames, though, as random-looking DNS is rather easily detectable and thus can alert the watchmen; it used a set of predefined IP addresses. As I've read in some reports (can't verify that from my experience), these were other Dridex hosts that had been infected and had public IP addresses, and were therefore capable of accepting traffic from other Dridex-infected hosts calling from behind NAT. Was this metadata useful? Did it help in responding to this incident?

    Yes: in the end it gave me the times and source hosts trying to establish C&C connections, AFTER we were alerted by a host-based tool. If they had Carbon Black, even this information wouldn't have been as useful, as we'd just query Carbon Black for a list of hosts matching the IoCs I established and instruct them to clean or shut down the hosts.
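
    Something along these lines, sketched with the cbapi Python bindings for CB Response. Treat it as an illustration only: the C2 addresses are made-up placeholders, and the 'ipaddr' query field is from memory, so check it against your own cbapi documentation.

    ```python
    from cbapi.response import CbResponseAPI, Process

    # Hypothetical Dridex C2 IoCs -- placeholders, not real indicators.
    C2_ADDRESSES = ["203.0.113.10", "203.0.113.11"]

    cb = CbResponseAPI()  # credentials come from the local cbapi profile

    infected = set()
    for ip in C2_ADDRESSES:
        # 'ipaddr:' matches IPs seen in a process's network connections
        for proc in cb.select(Process).where(f"ipaddr:{ip}"):
            infected.add(proc.hostname)

    print("Hosts with C2 connections, candidates for cleanup or shutdown:")
    for host in sorted(infected):
        print(f"  {host}")
    ```
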

    Other than that -- nope, the scope was this particular local IR, as opposed to trying to use this metadata for possible attribution or even counterhack activities in an attempt to reach the source of the C&C network, something folks at FireEye could do or be interested in.

    Plus, network monitoring wasn't robust here: without full-blown static analysis, which takes time, we couldn't be sure about C&C timeframe patterns. Who knows, it could be that under certain circumstances the malware is instructed to call back after a while, or an infected host may be down for whatever reason, so the only robust way to detect and clean it was relying on host-based tools.

    What was the overall weight of network monitoring in this case compared to host activities? Probably around 5%.

    That's what I would advise the TS to remember if he takes your advice on reading outdated books on outdated approaches. Remember that most of this info is perfectly applicable, but its role in modern IR is 5-15%. The rest is host-based, so you'd better spend most of your time learning host-based tools, Carbon Black for example, which is mentioned in the JD specifically.

    And I agree that a hammer has its use in modern manufacturing. The question, though, is what the role of a hammer is in modern manufacturing as opposed to 19th-century manufacturing. I'm arguing that it's not as high as it was, and one shouldn't invest much time learning hammering; otherwise one risks one's employment chances.
  • YFZblu Member Posts: 1,462
    I'm not saying that network monitoring is totally unimportant; yes, as you mentioned, it is useful for at least getting some metadata on network activity, but these days it plays more of a supplementary role, as opposed to the times when Bejtlich was still doing something technical.

    This is a much more reasonable statement than what was said earlier when describing NSM as "outdated" and "useless". I think that type of thought process is far too dismissive. Still, I think you're minimizing the importance of network-based metadata too severely. NSM is not just about identifying assets communicating with bad things on the internet. It is impossible to know what will be most important during an investigation before an attack takes place. A large advantage of NSM is its ability to passively scope an enterprise in preparation for an incident. The value of understanding how an enterprise 'talks' cannot be overstated. Otherwise you're at the mercy of assuming what is going on network-wise based on how IT thinks it should be talking. Given that the vast majority of IT operations fail to keep accurate and up-to-date network documentation, taking on this assumption will put you in a bad spot.
    That's nice, but the JD seems to ask about host-based IR experience first and mentions specific products, which makes sense, considering the roles host- and network-based forensics play these days.

    And because a specific host-based product was listed, I didn't make an attempt at recommending anything. It's right there. Conversely, the job listing also contained requirements like "IDS" and "tuning" and "engineering" and "strategy" - these are more ambiguous terms to someone who isn't familiar with them in a security monitoring context. Bejtlich's older work still covers that material in one stop better than any author I'm familiar with.

    OP: I'll also point out that Bejtlich authored a more recent work (2013) on the subject of NSM. It's more focused on the practical, hands-on-keyboard aspect of NSM and would be a great complement to 'the Tao' I referenced earlier.
    Okay, you are right, the sweeping statement on dumb malware was not entirely correct, so you had your chance. A correct statement would be that the vast majority of advanced malware relies on encryption to avoid detection/prevention at the NIDS/NIPS level, while dumb malware, targeted primarily at home users, shows less encryption in network traffic.

    This is also a more reasonable statement than before, but in my opinion it still distracts from the larger point: generally speaking, advanced attackers would prefer to use as little malware as possible. Sometimes encryption will be used, sometimes it won't. It just depends. The presence of encryption alone has nothing to do with attacker or malware sophistication. The entire picture of a sample should be examined, not one aspect. This applies to both structured threats and commodity malware campaigns that affect a wider audience.
    AFAIR, Dridex relied on encrypted traffic over TCP 8443, which was open as it is a rather common alternative TCP port for TLS.

    You remember correctly - Dridex did indeed communicate over alt-SSL after a host had been compromised. I have no doubt you were successful in assisting an organization with Dridex using important and thoughtful host-based techniques. It should be recognized, though, that Dridex campaigns also communicated in clear text before a host had been compromised:

    1. An email would enter the environment. Ideally, SMTP would be monitored with an IDS capable of inspecting and recording attachment data. As the Dridex campaign entered the environment, an IDS could alert the monitoring team to the presence of a macro-laced Office document being delivered. Also ideally, the IDS in question would be able to download and store the attachment locally - where automation could take over to extract the macro for review by an analyst (see the sketch after this list) and/or submit the document to a localized sandbox. (The sandbox would have its own automation agenda.)

    2. Given that SMTP data is now in hand, a team can begin searching SMTP logging to identify the scope of affected mailboxes. SMTP logging could be provided by the IDS, or by searching the mail gateway or the internal SMTP environment itself. Either way, at this point it is now possible to purge emails from affected mailboxes.

    3. In the event that a user immediately opened the document and the macro in question ran, a plain-text HTTP request would be made for the Dridex payload. The IDS would be looking for PE file downloads at the network level - preferably storing copies of interesting / less common PE downloads locally for review and automation purposes.
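
    To make the macro-extraction piece of step 1 concrete, here's a minimal sketch using the oletools library (olevba). The file path is a hypothetical placeholder for whatever the IDS carved off the wire, and this obviously skips the carving and sandbox-submission stages:

    ```python
    from oletools.olevba import VBA_Parser  # pip install oletools

    ATTACHMENT = "carved_attachment.doc"  # placeholder for the IDS-carved file

    vba = VBA_Parser(ATTACHMENT)
    if vba.detect_vba_macros():
        print(f"[!] {ATTACHMENT} contains VBA macros")
        # Dump each macro stream for analyst review
        for _, stream_path, vba_filename, vba_code in vba.extract_macros():
            print(f"--- macro {vba_filename} (stream: {stream_path}) ---")
            print(vba_code)
        # Flag auto-exec and suspicious keywords (AutoOpen, Shell, etc.)
        for kw_type, keyword, description in vba.analyze_macros():
            print(f"[{kw_type}] {keyword}: {description}")
    else:
        print(f"[-] no VBA macros found in {ATTACHMENT}")
    vba.close()
    ```
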

    A simple situation like this is where NSM demonstrates its flexibility. While it is important for an operations group to identify known bad with network and host-based signatures, it's as important (increasingly more so) for an operations group to identify new bad. NSM helps facilitate that strategy by identifying an entire class of email-based threats. Most importantly, in concert with host-based logging and detection mechanisms, NSM helps enable an organization to identify empirical root cause, painting a complete picture of what took place.
  • xxxkaliboyxxx Member Posts: 466
    I bench more than both of you combined....
    Studying: GPEN
    Reading: SANS SEC560
    Upcoming Exam: GPEN
  • MitM Member Posts: 622
    A nice healthy debate here. Thanks, fellas. I will check the reference links. I'm still researching which area of security is ideal for me.
    I bench more than both of you combined....

    LOL
  • lostsol Member Posts: 18
    Great debate. I've taken FOR572 and thought it was great. I was considering SEC503 for my next SANS course, but after reading these posts and taking into account that more and more traffic and obfuscation is over SSL, I'm debating whether I should take 503. It seems like a great fundamentals course, and there are many times when I jump ahead and learn new technologies but lack the fundamentals. Still recommend it? Does anyone have any other recommendations for a SANS course that would help with network monitoring and security? I saw a SIEM class, SEC555, is now being offered, but no certification is tied to it yet.