Knowledge Base

Bad Rabbit, Not Not Petya. Another Ransomware Outbreak!

November 15th, 2017

If that headline makes no sense, you may not have been following cybersecurity news closely enough! The weeks since October 24th have seen 2017’s third major propagation of a piece of ransomware, as a modified version of NotPetya, named Bad Rabbit, has spread throughout much of Europe and Asia. The major distribution path for the malicious code looks to be a fake Flash update, which appears as a pop-up on legitimate websites that have been compromised. This is an effective attack vector, as users are typically far more likely to accept such a download from a previously trusted site than from an email sent by an unknown sender.

Bad Rabbit appears to have been coded by fans of pop culture: the name is a play on director J.J. Abrams’ company Bad Robot, and the code contains the names of three dragons from the immensely popular series Game of Thrones. Taking a cue from Hollywood that they would likely complain about, Bad Rabbit appears to be largely just a reboot of NotPetya and WannaCry; analysis by CrowdStrike has determined Bad Rabbit to be nearly 70% identical in code to its predecessors.

The most interesting difference between Bad Rabbit and the two major outbreaks earlier in the year is that Bad Rabbit appears to discriminate in who it infects. Where WannaCry and NotPetya would infect any environment in which they were executed, Bad Rabbit appears able to determine the potential value of users who land on a distribution page prior to launching an attack. Attacks distributed directly against several specific networks are also suspected in the spread of Bad Rabbit.

The ransomware seizes machines it can access, spreading laterally across an environment by brute-forcing password combinations, encrypts accessible data, and pops up a familiar message demanding payment in Bitcoin for the release of company/user data. The good news is that, since Bad Rabbit was identified early and is largely familiar, most security vendors have quickly released updates and patches to help block its spread.

CIS advises that review and deployment of any critical security updates be done regularly as a Best Practice, to ensure the latest protections. While Bad Rabbit does not utilize a known exploit like EternalBlue, most of the victims in those earlier attacks were only exploited because they lacked current updates and patches that had already been released.

Additional information for CIS’ Managed Services Clients who are operating Webroot and Sonicwall:

Webroot : https://community.webroot.com/t5/Announcements/Webroot-Protects-against-Bad-Rabbit/td-p/304856

SonicWALL: https://www.sonicwall.com/en-us/support/knowledge-base/171025100215963


Why Patching Really Matters

November 9th, 2017

From cleaning the gutters, to re-sealing the deck every two springs, to getting your oil changed, nobody enjoys the routine maintenance that comes with life. The same is true for many Administrators and their ongoing maintenance duties, particularly when it comes to Patches.

Manufacturers issue patches for two primary reasons: features and security. Feature patches provide new options for users that can provide additional value to the originally purchased software. While it is always a Best Practice to keep software within a version or two of current-release, Feature patches are mostly non-critical. Security patches are issued to close known exposures in software and operating systems, after they have been discovered. These should be treated with paramount concern, but can frequently fall by the wayside as items that are perceived as more important take precedence.

Zero-Day Attacks are difficult and infrequent; exploiting known vulnerabilities is easy!

It’s true: while Zero-Day Attacks are frightening, they are very difficult to architect. Think of Zero-Day Attacks like a heist pulled by the Ocean’s Eleven crew: high-profile, well planned, and expertly executed, taking significant time and resources. Such heists are extremely rare, even in the movies, as they are simply impractical. By comparison, exploiting known vulnerabilities is a far easier and more common approach.

A typical “Black Hat” will spend time knocking on the virtual doors of numerous companies and government organizations, looking for anyone foolish enough to have left vulnerabilities unpatched. Once identified, these organizations can be breached within seconds, utilizing common toolkits, following well-publicized attack vectors, allowing for maximum damage. Simply applying Security Patches as they are released will dramatically reduce exposure to this type of attack.

What’s the frequency?

Patch Management should be handled on a multi-tiered schedule, with daily, weekly, and monthly reviews and scheduled patching. Daily, available patches and security bulletins should be reviewed, and anything critical should be applied. Weekly, non-security Feature Patches that provide valuable functionality should be reviewed and applied as needed. Monthly, a review of all available patches should be conducted, and any remaining items should be applied. Additionally, it is a good idea to review Security Logs, to see where attempts at access occur, to better protect those areas proactively. This can become quite cumbersome for an internal team to tackle on their own. The average IT group is already understaffed by more than two people, with most resources juggling too many tasks in too few hours.
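The daily/weekly/monthly tiers described above can be sketched in a few lines of Python. This is purely an illustrative example, not CIS tooling: the patch records, KB numbers, and severity labels are invented, and a real feed would come from vendor bulletins or a patch-management/RMM platform.

```python
# Hypothetical patch records; in practice these would come from vendor
# bulletins or a patch-management/RMM feed.
patches = [
    {"id": "KB5001", "severity": "critical", "type": "security"},
    {"id": "KB5002", "severity": "moderate", "type": "feature"},
    {"id": "KB5003", "severity": "moderate", "type": "security"},
]

def triage(patch):
    """Map a patch to the daily/weekly/monthly review tier described above."""
    if patch["type"] == "security" and patch["severity"] == "critical":
        return "daily"    # critical security fixes: apply as soon as reviewed
    if patch["type"] == "feature":
        return "weekly"   # feature patches: review and apply as needed
    return "monthly"      # everything remaining is swept up in the monthly pass

schedule = {p["id"]: triage(p) for p in patches}
print(schedule)
```

Even a toy triage function like this makes the review cadence explicit, so nothing critical waits for the monthly sweep.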

How do I alleviate the stress on my IT team, while keeping up with critical Security Patches?

CIS conducts regular Patch Management as part of a Managed Services Program, while providing a team of experts to deploy, implement, integrate, customize, secure, and support solutions of all types. Clients utilize CIS to offload the monotonous task of handling things like patching, allowing an internal team to focus on the more critical aspects of their job. CIS resources will monitor, review, and deploy any critical patches, with a customized approach to each client’s individual needs and requirements.


End-Users & IT Security – the Weakest Link

October 18th, 2017

When thinking about end-users and IT Security, two related things leap to mind: the first is an early-2000s game show, the second an adage about chain strength. Both serve to say the same thing: when it comes to network security, your end-users ARE your weakest link.

Regardless of the amount of money spent on firewalls, intrusion detection, log management, compliance, and network vulnerability testing, security surveys are unanimous that end-user actions are directly responsible for more than 80% of global security breaches. Whether direct or indirect, intentional or accidental, end-users are reliable only in that they expose their employers to massive security risks. Because of this, diligence is required in making sure your users understand the corporate risks inherent to a data breach, and what they as individuals and as a collective can do to prevent such unauthorized access.

End-user focused security services have become of paramount concern in the age of the cloud and the remote worker; there are simply too many paths open for an industrious hacker to exploit. Computer Integrated Services works with clients to help identify the key users and topics of concern, then conducts end-user security seminars focused on education and awareness of security best practices.

“But a lot of my users are Millennials, they were born online, they know this stuff”

While that may be true of an increasing number of companies, the truth is that a large portion of the workforce is still comprised of the opposite end of the user spectrum, aging users who are not generally inclined to keep up with changes in technology. And, often, these are your power users, key executives with the most access and the most privileged information.

The first step CIS’ elite Network Security Team recommends is to conduct end-user testing from several vectors. It is critical to identify potentially troublesome users in your environment. This identification allows CIS and our client’s IT staff to provide guidance to specific users, as well as attempt to establish technology barriers to help protect them.

Testing also serves as a guideline for focusing the development of CIS’ Security Seminar program, a customized end-user focused presentation, or series of presentations, from CIS’ Chief Information Security Officer. Training seminars are conducted in groups of users, typically coordinated by need and level of capability, to allow for focused learning, as well as efficient use of time.

Seminar material varies from client to client, depending on the needs of each specific user base, as well as the continual emergence of new network security threats. Seminars are typically best when they are engaging and interactive, so CIS always encourages questions within the covered topics. Follow-up materials such as best-practices reminders, written tests to review and reinforce the subject matter covered, and follow-up one-on-one sessions are also typically provided on an as-needed basis.

Topics covered will typically include:

  • Best Practices for Password Management
  • Best Practices for Web Browsing and Applications
  • Best Practices for Mobile Devices / BYOD
  • What is Malware, Adware, Ransomware, etc. and How to Not Infect Your Company
  • Social Engineering – Targeted Awareness: Phishing, Spear-Phishing, Shoulder-Surfing, Dumpster Diving, etc.
  • Cloud Storage and Access – Best Security Practices for the Public Cloud
  • Remote Access and Connectivity
  • Encryption, Data Security, Data Destruction, and Compliance (Internal and/or Various Regulatory)

Regularly covering these topics, and more, and staying diligent about end-user education is the only path toward real security. Users are notoriously demanding of any IT staff; it’s time to be demanding back. IT Security should be every employee’s concern.

For further details, or to schedule a CIS Security Seminar, please contact us today.


Is the KGB running your anti-virus?

August 31st, 2017

Well, probably not, but that could largely be due only to the fact that the KGB was disbanded in 1991. The current iteration of Russia’s State Security organization is known as the Federal Security Service of the Russian Federation, or FSB; this is the organization alleged to be utilizing ties to Kaspersky Lab for nefarious purposes.

Kaspersky Lab is the 4th most widely adopted anti-virus platform in the world, and holds the largest market share of European cyber-security manufacturers. With over 400 million users added since the company was founded in 1997, Kaspersky is a very large player in the global security space. That makes dire warnings about the company’s product line, such as those issued by U.S. Cybersecurity Coordinator Rob Joyce at the end of August, extremely concerning.

“I worry that as a nation state Russia really hasn’t done the right things for this country and they have a lot of control and latitude over the information that goes to companies in Russia.” – Rob Joyce, U.S. Cybersecurity Coordinator

Following Joyce’s commentary about the security of Russian companies, he continued to say that he would not recommend Kaspersky Lab products to family and friends, further confirming an official stance by the U.S. government that Kaspersky products are not to be trusted.

Suspicions about Kaspersky Lab have been abundant in tech and government communities for several years now, persisting despite strong denials by Kaspersky Lab and its founder, Eugene Kaspersky. Kaspersky, acting as CEO of the company, has gone so far as to offer source code for his security products for independent review; an offer which has yet to be accepted by any government organization.

A great deal of the suspicion in this case is directed at Kaspersky himself. A member of Russia’s elite, Kaspersky was educated in a KGB-connected University, and maintains many connections to high-profile figures in Russian government and national industry. The Russian government, and this community of oligarchs, has repeatedly and publicly made attempts to exploit Russian companies for their own benefit, both legally and illegally. There is a growing fear that even if Kaspersky Lab is not willingly cooperating with the Russian government, they may be otherwise compromised.

Recent news reports have confirmed that, throughout the summer, the United States Federal Bureau of Investigation has been meeting with U.S. energy and technology sector companies, to quietly advise them to remove all Kaspersky Lab products from their systems. Additionally, all products from Kaspersky Lab will no longer be utilized by any branch of the U.S. Federal Government. This is an extremely aggressive step which we believe the government would not have taken without careful consideration, as it has potentially broad impact.

At this point, CIS is recommending a highly cautious approach to this situation, and advising our clients to follow the lead of the U.S. Government, and begin to remove Kaspersky Lab products from any critical systems. Additionally, we advise that any current Kaspersky Lab clients either conduct a Network Vulnerability Assessment, or, at minimum, run network security scanning tools. Our team of security experts is available to discuss your requirements at any time, and make recommendations for alternate technology from more trustworthy organizations. If you would like to coordinate a call with our team, please contact your CIS rep today.


25 Best-Practices Tips for Security and Administration for the SMB

August 28th, 2017

Running the entire IT Operation for even the smallest business can be a tremendous challenge. Between keeping up with new technologies and threats that emerge on almost a daily basis, to handling licensing and budgeting, to squeezing extra life out of stubborn aging equipment, to handling an end-user community that may be “less than computer-savvy,” one wonders where the hours in the day for sleep can be found. And, in our experience, this is frequently not the only job the “Computer Person” has at their organization! Many people in a solo support role have ended up there simply through taking on various technology-related tasks. As their company grew over time, their responsibilities increased to the point where they are doing at least two full-time jobs.

CIS has worked with the small and medium sized business community throughout the company’s 22-year history.  The organization provides both monthly managed IT services as well as IT support blocks, both of which lead to directly working with many individuals who are that sole “IT Person” for their organization. Through this experience we’ve come to recognize certain common challenges and things left undone from client to client. Typically these are issues that a larger staff, or a support organization, can handle as priority items. For the small and medium sized IT person, they can typically be found on the “if I had more time” list.

The following suggestions are 25 quick recommendations we’ve provided to countless Small and Medium Sized Businesses to improve Security, Policy, Administration, and Stability.  CIS can provide direct support for any or all of these efforts, and many more, with an ongoing managed services agreement, or can provide staff augmentation and consultative services for clients that prefer to handle things mostly internally.

General Recommendations

To get us warmed up, here are a few easy, logical IT practices that everyone should follow:

  1. Consider implementing a unified security management platform, such as Alien Vault

Unified Security Management provides a “single pane of glass” view into your organization’s network security, asset inventory, vulnerability, intrusion detection, behavior monitoring, SIEM, and log management, dramatically reducing time-consuming tasks such as log reviews, and condensing everything into easily understood reports that can be immediately acted upon.

  2. Change all passwords at least yearly, including SANs, switches, wireless, DNS, etc. Never let only one person have access to the passwords.

This may seem obvious, but the vast majority of devices end up in “set and forget” mode, leaving them vulnerable to brute-force and phishing attacks, as well as breach by a current or former employee. Where applicable, use passphrases; the more complex yet memorable, the better.
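To illustrate the passphrase advice, here is a small Python sketch using the standard `secrets` module. The word list below is a tiny placeholder; a real deployment would draw from a large curated list such as EFF’s diceware words.

```python
import secrets

# Illustrative word list only; use a large curated list (e.g. EFF's
# diceware list) in any real deployment.
WORDS = ["copper", "lantern", "orbit", "maple", "thunder", "velvet",
         "harbor", "sketch", "prairie", "noble", "cinder", "quartz"]

def passphrase(words=4, sep="-"):
    """Build a memorable-but-complex passphrase from random words,
    using a cryptographically secure random source."""
    return sep.join(secrets.choice(WORDS) for _ in range(words))

print(passphrase())
```

Note the use of `secrets` rather than `random`: passphrase generation should always use a cryptographically secure source.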

  3. Think of security in layers. Protect every layer. Think like a hacker.

A thought straight from the oft-quoted The Art of War, if you don’t understand your “enemy” – in this case hackers of all types as well as, unfortunately, internal threats – you cannot hope to defeat them. If you approach your layering of network security not from an internal place of comfort, but from an external place of seeking access, you will be on your way to thinking like the enemy.

  4. If something very complicated is outside your expertise, pull in resources that are experts and work as a team. It can save time and money and ultimately be in your best interest. Google can help a lot, but having someone who knows exactly what to google is significant.

Nobody knows everything, not even our team, but sometimes the hardest thing to do can be to ask for help. We’ll just leave this here:  Contact a CIS Rep Today

  5. Have documentation of your environments.

A well-documented environment is a well-protected environment. This is a crucial but often forgotten step in the disaster recovery planning process. Backups and the ability to spin up virtual or even physical servers are great, but good documentation is the roadmap on how to get from hopelessly lost back to functionality.

  6. Have a DR plan, even if it’s only a few informal thoughts.

What would happen if your primary production server crashed? What would happen if the office lost power for a week? If you can answer basic questions like these, you’re on the way toward disaster readiness.

  7. Which reminds us: “A backup can get you out of any jam.” Always have a backup.

Have a backup. Use your backup. TEST your backup. Sending your data offsite is a great start, but how long does it take to bring it back, stand up a new server, and get everything running? Know your Recovery Point Objective (RPO) and Recovery Time Objective (RTO): the point in time to which your business must recover, and the time it can tolerate the recovery taking. Be sure that you have full image-based backups. If there is anything important, have a backup. Have at least one backup offsite.
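The RPO/RTO idea can be made concrete with a short, hypothetical Python check. The four- and eight-hour targets below are invented for illustration; your objectives should come from the business, not from IT.

```python
from datetime import datetime, timedelta

# Hypothetical targets: recover to within 4 hours of data loss (RPO),
# and be back online within 8 hours (RTO).
RPO = timedelta(hours=4)
RTO = timedelta(hours=8)

def backup_meets_rpo(last_backup, now):
    """True if the newest restorable backup falls within the RPO window."""
    return now - last_backup <= RPO

def restore_meets_rto(measured_restore_time):
    """True if a *tested* full restore completed within the RTO."""
    return measured_restore_time <= RTO

now = datetime(2017, 11, 1, 12, 0)
print(backup_meets_rpo(datetime(2017, 11, 1, 9, 30), now))  # backup is 2.5h old
print(restore_meets_rto(timedelta(hours=10)))               # a 10h restore blows an 8h RTO
```

The second function is the one most organizations never run, because it requires actually timing a full test restore.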

  8. Write every email with the expectation it could become public. If you aren’t expecting an email, assume it’s a scam. Have processes in place to verify requests like wire transfers over email.

As Harold Melvin and the Blue Notes sang, “If you don’t know me by now…”.  We could list endless examples of company breaches based on one user clicking a bogus link in an unknown email, corporate messages landing in the News, or payments being sent to phony vendors based on invoices that “looked real to me.” Educating yourself and your end users about the looming threat posed by everyday email is critical.


Network:

  9. Run network and security scans on a regular basis

CIS recommends running a Network Vulnerability Assessment (NVA) on at least a yearly basis, if not quarterly. Our team offers NVAs geared toward the Small and Medium Sized Business, designed specifically for affordability and effectiveness. Additional security can be gained by running more regular reports with tools such as Network Detective or Nessus. Management reviews of security reports should be undertaken on at least an annual basis, to provide visibility into issues potentially impacting compliance and finances.

  10. Make sure firewalls are up to date on firmware

A wall with a hole in it is not a wall at all.

  11. Make sure all VPNs, both site-to-site and client-based, use strong encryption settings such as AES-256

Stronger encryption standards yield more secure communications. If you can use the same encryption as a federal agency, why wouldn’t you?

  12. Check firewall WAN-to-LAN rules; ensure only what needs to be allowed through is. If all outgoing traffic is allowed, consider limiting it.

Limiting traffic is a great way to manage bandwidth as well as security; only allow business-related traffic to flow.

  13. If services like SSH are enabled, consider limiting who can connect, and putting them behind a VPN.

Exposing the network to remote access and control can be a dangerous proposition, unless it is well implemented. Restrict access to only those who absolutely need these services. Putting SSH behind a VPN provides an additional layer of security.
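As one hedged illustration of both points, an OpenSSH `sshd_config` can restrict who may connect and bind the daemon to a VPN-only address. The address and group name below are placeholders for your environment, not recommended values:

```
# /etc/ssh/sshd_config -- illustrative hardening fragment; adjust to your environment
ListenAddress 10.8.0.1        # hypothetical VPN-side address: SSH is unreachable from the WAN
AllowGroups ssh-admins        # placeholder group: only explicit admins may log in
PasswordAuthentication no     # keys only, which blunts brute-force attempts
PermitRootLogin no            # administrators log in as themselves, then elevate
```

Remember to reload the SSH service after changes, and keep an existing session open while testing so a typo cannot lock you out.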

  14. Look at logs for login failures; brute-force attacks on public IPs are rampant.

Hackers are knocking on the virtual door every day; keep an eye on them and monitor any suspicious traffic. If patterns emerge, consider whether attempts are random or targeted and more nefarious.
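To show the kind of review this means, here is a small Python sketch that tallies failed logins per source IP from syslog-style lines. The log entries, hostnames, and IPs below are fabricated samples; in practice you would read `/var/log/auth.log` or a firewall/SIEM export.

```python
from collections import Counter

# Fabricated sample lines in the common sshd syslog format.
log_lines = [
    "Oct 25 03:12:01 gw sshd[811]: Failed password for root from 203.0.113.7 port 52100",
    "Oct 25 03:12:03 gw sshd[811]: Failed password for root from 203.0.113.7 port 52102",
    "Oct 25 03:12:05 gw sshd[811]: Failed password for admin from 203.0.113.7 port 52104",
    "Oct 25 09:44:18 gw sshd[902]: Failed password for jsmith from 198.51.100.23 port 61000",
    "Oct 25 09:45:02 gw sshd[903]: Accepted password for jsmith from 198.51.100.23 port 61001",
]

# Count failed attempts per source IP; repeated failures from one address
# suggest a targeted brute-force attempt rather than background noise.
failures = Counter(
    line.split(" from ")[1].split()[0]
    for line in log_lines
    if "Failed password" in line
)

for ip, count in failures.most_common():
    print(ip, count)
```

A single failure followed by a success (as with the last two lines) is usually just a typo; dozens of failures from one address deserve a firewall rule.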


Administration & Policy

  15. End-of-life operating systems, applications, and hardware need to be replaced.

This is the easiest path into any environment. If someone looking to breach your environment finds an end-of-life OS, it’s Game Over. End-of-life systems receive no security patches, no updates, and no support from vendors, leaving the business at risk. Even a system as recent as Windows 7 is already in “extended support” and should be updated.

  16. Patch, and have patch policies and reports, for all operating systems as well as 3rd-party apps. Also have patch cycles for additional equipment, like printers, wireless access points, switches, etc.

Patching is so simple and so frequent that it has become a mundane part of the routine, one that’s easy to ignore or postpone for something more interesting. Unfortunately, patching is the one surefire way to stay up to date with known vulnerabilities. CIS strongly recommends monthly patch review and deployment; systems that were patched on even a quarterly basis were impacted by WannaCry. Any device that has an IP address on the network is vulnerable; maintain the latest patches and updates to stay a step ahead.

  17. Change standard usernames like administrator or admin to something nonstandard.

You’re not President Skroob; the password for your network – or your luggage – should not be 1-2-3-4-5! Don’t make it easy on someone looking to breach your environment: as a standard practice and written policy, change standard or default usernames and passwords to something non-standard. The frequency with which we see “admin/admin” credentials is astounding.

  18. Review domain security policies and group policies. Ensure basics are in place, like passwords that must be changed every 90 days and machines that lock after 15 minutes of inactivity.

Basic security administration can be accomplished with simple policies such as these. If an employee leaves their workstation for 30 seconds for a coffee refill, it is typically fine to leave the computer unlocked, but what if they’re gone for 15 minutes? An open computer leaves both data and privilege available to anyone who happens to pass by. Rotating passwords, no matter how much users may grumble, is simply basic entry-level security.

  19. SSL/TLS should be the standard for web servers and services; plain HTTP should be phased out. Older protocol versions such as SSLv3 should be disallowed, and new certificates with stronger encryption settings should be standardized.

SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security) are the standard for secure communication between web browsers and web servers. Plain HTTP is antiquated and no longer secure; it must be replaced.
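As a minimal illustration, modern Python lets a client refuse legacy protocol versions outright through the standard `ssl` module (where TLS has superseded the original SSL):

```python
import ssl

# Build a client context and refuse anything older than TLS 1.2;
# create_default_context() already enables certificate verification.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version.name)
```

Servers can enforce the same floor from their side; the point is that nothing in the chain should be willing to negotiate down to SSLv3 or early TLS.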

  20. Database information should be encrypted at rest and in transit.

Database information is typically considered critical company data: it is where you store information about clients, projects, and vendors. Most organizations protect database information at rest by deploying encryption; however, as the workforce changes, more users access database information remotely. If a user’s application exists outside of the server on which the database resides, the data will be in transit while it travels to the user. It is critical that this communication maintain the same level of encryption as the database at rest.

  21. Review AD groups like Administrators, Domain Admins, and Enterprise Admins periodically. Limit which accounts have that level of access as much as possible. Review local admin accounts regularly.

The best approach to access is to provide a Least Privilege model, giving users only the access they need to do their jobs, and nothing more. No user should have access to systems they have no need for, and no users should have access in overlapping systems such as accounts payable and accounts receivable. Separation of Duties reviews should be conducted on a regular basis and enforced via policy and automation.
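A Least Privilege review can start as simply as diffing actual group membership against an approved baseline. This Python sketch uses invented account and group names; in practice the “actual” side would come from a directory export.

```python
# Approved membership baseline (hypothetical names) versus what a
# directory export actually shows; anything unexpected warrants review.
approved = {"Domain Admins": {"it-admin1", "it-admin2"}}
actual   = {"Domain Admins": {"it-admin1", "it-admin2", "temp-contractor"}}

def unexpected_members(group):
    """Accounts present in the directory but absent from the baseline."""
    return actual.get(group, set()) - approved.get(group, set())

print(unexpected_members("Domain Admins"))  # flags the contractor account
```

Run on a schedule, a check like this turns a quarterly audit chore into an automated alert the moment an unapproved account appears.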

  22. Make a list of all workflows and servers. Consider moving more workloads to cloud services, at least when existing hardware, such as old SANs, goes end-of-life or out of support.

Tracking what workflows reside on what servers is critical in supporting an environment. A proper assessment of the risk to any critical business application must include thoughts about the hardware or server environment in which it resides. As hardware goes end of life, tremendous benefits can be reaped from migrating its applications to cloud-based virtual servers.

  23. For all admin and service accounts, maintain an access control list of where they are used. That will make changing those passwords regularly much easier.

Documenting things makes processes easier to implement. Any documentation of passwords should be stored securely.

  24. Check AD health regularly.

Active Directory is the backbone of your network environment; it should be kept up to date and healthy. CIS recommends Active Directory management utilities such as DRA from Micro Focus to help with regular AD administration.

  25. Have alerts and monitors in place for critical services, servers, and network equipment, and monitor them.

Monitoring server thresholds and utilization, network equipment status and access attempts, and the many other things going on in any environment is a key component of successful management. Without monitoring and the associated alerts, we would constantly be putting out fires rather than handling things proactively.


It is Past Time to be Open to Windows Server 2016

August 15th, 2017

Nobody likes upgrades or changes to back-end infrastructure that, on a typical day, just works.  Such is the life of the server.  Server operating systems are typically a “set and forget” proposition, with the knowledge that this group of servers runs on that flavor of Windows, which will be upgraded at the same time as the hardware.  However, there are compelling reasons to consider an upgrade to Windows Server 2016.

The MOST pressing concern for most IT Departments we work with is Security.

One of the primary vulnerabilities CIS’ elite Security Team uncovers on a regular basis is the continued use of unpatched, unsupported, aging software and operating systems.  Critical vulnerabilities emerge rapidly as products age out of their support model.  If you are currently running Windows 2003 at the server or Windows XP at the desktop, please stop reading and make the change; we will wait.  It’s that important.  For more “current” users who are running Windows 2008 R2, or even Windows 2012, we’d like to ask you to look at those years, and consider where you were at the time.  Now that we have a real feel for how long those products have truly been in the environment, we can understand the inherent security risk in areas like Access Controls and Privileges.  Upgrading Windows provides a chance both to utilize new tools, such as JEA (Just Enough Administration), and to take the opportunity to review who has access to what, and why, across your environment.

Providing a more efficient and reliable platform for applications

At long last, Microsoft has decided to adopt a Container model in their Windows Server product.  With a stated goal of making Windows 2016 a cross-platform operating system for a seamless Hybrid Cloud and On-Premises model, Containers are a hugely important new development in Windows Server 2016.

The idea behind containers is to “park” or “Dock” processes that typically chew on memory resources, segregating them from other processes in a bubble of sorts.  This frees all the other applications to consume required memory and resources, functioning more reliably and efficiently either locally or remotely.  Windows Server 2016 actually includes two versions of Containers, a standard Docker and a customized Hyper-V version.

Nano, Nano.

Ok, you probably read about or used 10 products already today that have nano in their name, but this one may be the most interesting innovation of all.  Included with Windows 2016, under the hood, is Microsoft’s new small-footprint operating system, known as Nano Server.  Nano Server utilizes significantly fewer resources, currently able to run on 512MB of disk space and barely 300MB of RAM, and yields a staggering 92% fewer critical bulletins and 80% fewer reboots than typical Windows Server.

Nano Server is not a typical operating system; it has no local GUI or command line and is intended entirely as infrastructure, to work with Hyper-V and in the Hybrid Cloud or cloud-native application space.  A single implementation of Nano Server with 1TB of RAM today can run 1,000 virtual instances of Nano Server, an impressive feat on which Microsoft hopes to dramatically improve. As Microsoft puts it, Windows 2016 is here to virtualize any workload, without exception.

We don’t work in the cloud, should we care?  YES!

We get it, the cloud isn’t for everyone.  It can be a daunting effort just getting there, and sometimes the learning curve for end users can be too steep.  This is why Microsoft made certain to pack Windows Server 2016 with On-Premises focused enhancements as well.  A primary change is that Windows Server 2016 supports up to 24 TB of RAM to run the resource-intensive applications used by most businesses.  Significant changes have been made to Hyper-V’s encryption capabilities, access via PowerShell, and the ease with which memory and network configurations can be modified.

All of these changes are aimed at delivering a better experience to clients not ready to move to the cloud or hybrid cloud, while providing a platform from which to make, first, the step to Hybrid, followed by an enthusiastic leap to Cloud!

Server power equals business capability

Servers are the underappreciated backbone of a business.  The more powerful the server the more stable and better performing it will be.  By upgrading to Windows Server 2016, your end users will get the best possible experience while running their applications and workloads, and you will gain significant scalability and flexibility to meet the changing needs of your company’s growing dynamic business.

Interested?  Contact your CIS rep today!

CIS has helped clients of all shapes and sizes restructure their Windows Server environment, through most of the previous iterations of Windows.  The CIS team has unrivaled experience providing customized upgrade and migration experiences, utilizing proprietary tools and approaches, to ensure success.  CIS can provide consultation as well as hands-on integration and implementation work for any On-Premises, Hybrid, or Cloud Server Windows environment, get in touch with us today to start the discussion!


So Your Email is in the Cloud, Now What?

July 18th, 2017

The days of “the cloud” being a nebulous industry buzzword are over.  There are clear winners in the cloud services game and they are, as anyone would expect, Microsoft, Google, and Amazon.  Amazon AWS and Microsoft Azure are leading the field in cloud based server space, which is being utilized for everything from development to production to disaster recovery needs.  In the end-user facing arena, Microsoft’s Office 365 is far outpacing its rivals, with Google Apps a distant second, staking out territory primarily in the Not-for-Profit and Education markets.

To date, CIS has migrated over 250,000 client users or “seats” to the Microsoft cloud.  Typically email and collaboration services are the proverbial toe that clients dip in the water, testing if the cloud is really the right fit for their business, but the journey to the cloud shouldn’t stop there; email is just the beginning!

It has become imperative that any organization whose physical or virtual data infrastructure relies upon aging hardware strongly consider Microsoft Azure or Amazon AWS when planning the next evolution of the server room or IT closet.  Myriad benefits can be reaped from migrating critical company data away from antiquated, slow, insecure hardware that requires on-site maintenance and care, as well as an expensive monthly outlay for power, cooling, real estate, and other such costs, to a modern, secure, reliable, disaster-proof, mobile platform.

Change can be revolutionary!

Imagine the following scenario: 

You manage IT for a 50-employee accounting firm in Manhattan.  As anyone in that position knows, in this context, “manage” means “completely run everything on a limited budget, with limited tolerance for IT requests.”  This is as tough a role as there is in the IT world.  With few dollars to spend, every decision becomes critical, and the need to squeeze extra life out of each product purchase is paramount; which is why you’re now running 10 critical production servers in a virtual environment hosted on eight-year-old, non-warrantied server hardware with a similarly aged SAN on the back-end.  You’ve made a Capital Expenditure (CAPEX) request for new hardware each of the last 3 years, and it’s been rejected each time.  You know your backups run, but you’ve never given them a full failover test.  Any critical issue with your server hardware could mean the end of your job, if not the entire company.

In this scenario, the traditional method would be continuing to propose the same large CAPEX project to replace old server hardware with new server hardware, working with Dell, HP, EMC, or other such manufacturers.  The traditional method will be a short-term fix for some of your challenges, but you will eventually end up back in the same situation, while continuing to pay the overhead associated with real estate, power, warranties, etc.  And that’s only IF your CAPEX request is approved!

If you’re smart in your new hypothetical IT Management job, you will surely consider a cloud-based solution, built on Azure or AWS.  Either solution will require a migration project, so there will always be a year-one expense, but in this scenario that expense will yield revolutionary change in your organization’s IT costs and capabilities.

Just a few of the advantages of considering a migration to the cloud:

  • Servers and related costs become Operational Expenditures instead of repeated capital outlay
  • Cost savings for server/SAN including real estate, electricity, warranty, security, HVAC, and operational charges
  • Protection behind hundreds of millions of dollars’ worth of network security in Microsoft’s or Amazon’s infrastructure
  • Global server replication for disaster recovery and business continuity
  • Mobility and flexibility options for remote / at-home employees
  • Ease of management, monitoring, and collaboration
  • Predictable and flexible costs

A migration to the AWS or Azure platform can either move existing virtual servers directly into the cloud, or set up new Windows Server 2016 servers in the cloud, with the data and roles transferred over to them.  As the company grows and new servers are required, our hypothetical IT Manager can provision them quickly, with minimal costs added to the monthly bill.

Similarly, temporary servers can be spun up then decommissioned for any specific project needs or “busy seasons.”  Going back to our avatar working at an accounting firm, for example, we can reliably predict that the firm’s computing needs are going to spike between January and May every year, and be far less demanding during the taxation off-season.  In a traditional server setting, it would be next to impossible for a small firm to repeatedly add server hardware and computing “horsepower” to the environment every year for only a few months.  However, by leveraging the cloud, you can create and eliminate servers at will, only paying for true utilization, and creating cost certainty as well as savings across the entire IT year.
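The seasonal argument above can be made concrete with a back-of-the-envelope model.  This is a minimal sketch with entirely hypothetical figures (the server counts, monthly rate, and hourly rate are invented for illustration, not real Azure or AWS pricing):

```python
# Hypothetical comparison: on-premises hardware must be sized for the
# January-May tax-season peak all year, while cloud servers are billed
# only while they are running.  All numbers are invented for the example.

HOURS_PER_MONTH = 730  # approximate billable hours in one month

def on_prem_annual_cost(peak_servers, monthly_cost_per_server):
    """On-prem: pay for peak capacity for all 12 months."""
    return peak_servers * monthly_cost_per_server * 12

def cloud_annual_cost(baseline_servers, peak_servers, peak_months, hourly_rate):
    """Cloud: pay hourly for the baseline year-round, plus the seasonal
    extras only during the busy months."""
    baseline = baseline_servers * hourly_rate * HOURS_PER_MONTH * 12
    seasonal = ((peak_servers - baseline_servers)
                * hourly_rate * HOURS_PER_MONTH * peak_months)
    return baseline + seasonal

# The accounting firm from the scenario: 10 servers at peak, 6 off-season.
on_prem = on_prem_annual_cost(peak_servers=10, monthly_cost_per_server=300)
cloud = cloud_annual_cost(baseline_servers=6, peak_servers=10,
                          peak_months=5, hourly_rate=0.40)
print(f"on-prem: ${on_prem:,.0f} / year   cloud: ${cloud:,.0f} / year")
```

With these illustrative rates, the pay-for-utilization model comes out well ahead, and the gap widens the shorter the busy season is relative to the year.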

Finally, through migrating to a fully cloud-based server environment, your firm gains significant mobility and flexibility.  No longer are users tied to their office or to an aging, slow Citrix connection to critical data and applications.  Employees can work securely from anywhere in the world with internet access.  Similarly, leveraging these cloud offerings creates a server environment that is ironclad against localized outages or disasters in your office, city, or even region.  Both Microsoft and Amazon leverage redundant worldwide data centers, ensuring that your data and servers are always available.

CIS helps clients migrate email, data, and services to the cloud securely and successfully.  Available Managed Services programs further leverage the cloud to provide enhancements to performance, security, cost, and efficiency, translating to a better user experience and stronger KPIs for clients of all sizes.

At CIS, we see the future of Modern IT as smaller IT closets on premises, with cloud-based servers providing more secure and robust services, and affording all users the ability to work from anywhere, from any device, securely.

Want to make the journey to the cloud with us?  Contact your CIS rep, or click here to get in touch with us if we don’t already know you.  We look forward to the discussion!


How a Managed Services Approach to Ransomware Protection Would Have Blocked WannaCry

June 7th, 2017

by Nick Seal, Practice Director, Managed Services; Terry McBride, Sr. Sales Executive

 

Beginning on May 12, 2017, in more than 150 countries around the world, business operations for companies of all shapes and sizes ground to a halt.  Over 230,000 computers were infected in only a few days by a particularly malicious crypto-worm known as WannaCry.  Despite protections that blocked initial versions of the worm, even some of the largest corporate entities across Europe were eventually breached by later permutations.  In addition to thousands of smaller companies, victims of this attack included the British National Health Service, FedEx, Telefonica Spain, and Deutsche Bahn.

WannaCry utilized a Windows Server Message Block (SMB) exploit, a known issue that had been addressed by Microsoft in March of 2017 with a security patch.  However, due primarily to lax approaches to Endpoint Security and Patch Management, millions of machines around the world remained unpatched and vulnerable to the worm which, once inside, encrypted critical client data and demanded payment for its release.  The propagation of WannaCry became so severe that Microsoft broke corporate protocol to release a critical security patch for unsupported systems still running on Windows XP and Windows 2003.

 

Hundreds of Billions or Trillions: Cyber-Crime will cost the global economy dearly.

 

New malware and worms are released “into the wild” on a daily basis.  Even in the short weeks since WannaCry, two major pieces of malware, Fireball and EternalRocks, have propagated using similar exploits.  In the case of WannaCry, a hacking group known as the Shadow Brokers leaked the Windows exploit, which was discovered originally by the United States’ National Security Agency (NSA) but not reported to Microsoft.  It is believed that the Shadow Brokers group first learned of the exploit when it was revealed in a large dump of NSA tools by Wikileaks.  While analysts’ estimates vary greatly, it is a certainty that cyber-crime over the next 5 years will carry a cost of many billions, or even several trillions, of dollars.  Considering that the current estimated value of the global economy is only 78 trillion dollars, the implication is clear.

The nature of worms such as WannaCry is that they can cause a major security breach via even a single vulnerable machine on a network.  Microsoft’s security patch was already two months old when WannaCry began to spread in mid-May; however, unless machines were fully patched through that version, they were left critically vulnerable.  While device-based edge security is a common area of focus, companies typically do not patch and secure endpoints aggressively and proactively, because it is a cumbersome, time-consuming task for an IT team of any size.

 

CIS takes a layered, proactive approach to help clients protect themselves as much as possible from the next malware/ransomware outbreak.

 

The CIS Managed Services team takes a security-oriented, proactive approach to managing the endpoint.  CIS engineers and technicians work with clients to make recommendations on what can reasonably be done to mitigate the likelihood of falling victim to an attack.  CIS experts continually monitor the state of global malware attacks, and work with organizations such as Microsoft when critical patches are released.  As part of the Managed Services program, the CIS team works with clients to regularly review and update patches to the most current secure version.  And because one of the most popular attack vectors centers on the user, not the machine, end-user training and awareness seminars are an added service that significantly improves clients’ odds of avoiding the next attack.

CIS takes a layered approach to management and security of the endpoint.  Among the first tasks for any new managed services client is a full review of the entire environment, with the goal of creating a common baseline of software versions, patches, and security.

Supported Software:  CIS recommends that clients use only manufacturer-supported software that gets security updates, such as Windows 10 and Windows Server 2016.  This also extends to third-party software such as modern versions of line-of-business applications.  If there are any machines that use Windows XP, Windows 2003, or other non-supported Operating Systems, they must be replaced or upgraded immediately.

Patching: CIS’ approach to patch management is that monthly is good, weekly is better, and daily is best.  While daily patching is not practical for all organizations, CIS strongly recommends setting up at least monthly patch cycles for workstations and servers.  Companies following this simple rule in March would have been completely protected from WannaCry in May.
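The “monthly at minimum” rule above is easy to express as a policy check.  This is a minimal sketch, with hypothetical machine names and dates, of flagging any machine whose last successful patch is older than the policy window:

```python
# Illustrative patch-compliance check: flag machines whose last patch date
# exceeds the policy window.  Fleet inventory and dates are hypothetical.

from datetime import date, timedelta

PATCH_WINDOW = timedelta(days=31)  # the "monthly is good" floor

def stale_machines(last_patched, today):
    """Return (sorted) names of machines patched longer ago than the window."""
    return sorted(name for name, patched in last_patched.items()
                  if today - patched > PATCH_WINDOW)

fleet = {
    "ACCT-WS-01": date(2017, 5, 2),    # patched this cycle
    "ACCT-WS-02": date(2017, 3, 1),    # missed two cycles -- WannaCry-exposed
    "ACCT-SRV-01": date(2017, 4, 28),  # patched this cycle
}
print(stale_machines(fleet, today=date(2017, 5, 12)))  # → ['ACCT-WS-02']
```

In practice this kind of report is what a monthly (or better, weekly) patch cycle produces automatically; the machine flagged here is exactly the one the March SMB patch would have protected.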

Email Protection:  All email must be filtered and scanned, at a minimum by native tools within a service such as Microsoft Office 365 or Google Apps, but ideally utilizing additional layers of protection such as MimeCast or SpamStopsHere.  All links in emails should be scanned to ensure that they are not hiding malicious executable code.  End user IT Security Best Practices education should be rigorous and continual.
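To illustrate what link scanning looks for, here is a deliberately simplified sketch: pull URLs out of a message body and flag any that point directly at an executable payload.  Real gateways such as Mimecast rewrite and detonate links rather than pattern-match them; the URLs and extension list below are invented for the example.

```python
# Toy link scan (illustration only, not a production filter): extract URLs
# from an email body and flag those ending in executable file extensions.

import re

SUSPECT_EXTENSIONS = (".exe", ".js", ".scr", ".jar", ".vbs")

def suspicious_links(body):
    """Return URLs in the body that point at an executable payload."""
    urls = re.findall(r"https?://\S+", body)
    return [u for u in urls
            if u.lower().rstrip(".,)").endswith(SUSPECT_EXTENSIONS)]

message = """Your invoice is ready: http://example.com/invoice.pdf
Urgent update required: http://example.net/flash_update.exe"""
print(suspicious_links(message))  # → ['http://example.net/flash_update.exe']
```

A real filter also has to handle shortened and rewritten URLs, redirects, and payloads hidden behind innocuous extensions, which is why layered scanning plus user education is recommended rather than any single check.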

Anti-Virus:  CIS recommends anti-virus solutions that are not reliant on public definitions, as many traditional anti-virus products are.  When a new virus is released, any A/V application that does not already have a definition for it cannot detect it; this window of exposure is known as a zero-day attack.  CIS recommends a specific anti-virus product (which will not be named here, in the interest of client security) that is cloud-based and uses behavior analysis, rather than definitions, to identify malware and viruses as it scans everything in use.

Anti-Malware:  All CIS Managed Services clients receive an additional layer of protection with an anti-malware platform that does utilize definition-based scans, as a secondary protocol, in the event that the primary cloud-based A/V application misses something.

Edge Appliance:  Depending on the size and operational budget of a client, CIS recommends either SonicWALL or Cisco current-generation firewalls, with full security services, to block malicious code before it gets to the network.

End-Users:  User behavior and training can have a huge impact on security.  Supporting users directly and regularly conducting best practices seminars and hands-on training can help ensure users are following smart computing practices.

Security Testing:  In addition to the standard proactive services provided by the Managed Services team, CIS has a Network Security focused team that conducts high-level network penetration testing, vulnerability analysis, phishing and spear-phishing testing, and other social engineering, to help ensure that security standards are met or exceeded.

While there may be no such thing as TOTAL security, CIS’ philosophy is that if breaching the network is made difficult enough, the odds are strongly in your favor that the attack will simply move along to the next potential victim.  Taking a Managed Services approach to endpoint and network security will help ensure that critical company data is protected against attack vectors of all sorts.


“WannaCry” but Keep Calm & Don’t Panic…

May 15th, 2017

As you have no doubt read recently, we have an unprecedented global malware situation that has been directly impacting human lives around the world over the last few days.  As your trusted technology advisor, CIS will provide further information as more details become available.  Action must be taken as quickly as possible to protect exposed systems.  We have given this maximum priority and are leveraging the entire CIS team to address all possible solutions in the most expedient manner possible.

An email to clients this morning included a screenshot of the National Health Service of the U.K.’s website’s posted outage message as one example of what has been taking place all over the world.  In this one case, a hospital’s non-emergency operations have been suspended and ambulances are being diverted as a result of the malware’s existence.  In other words, this cyber-incident can now be classified by some as “deadly.”  There are widespread examples of similar impacts to critical services from around the world; however, dwelling on them is of little use.  The most important thing to focus on is:  What happens now?

Immediate actions we recommend taking include the following.  Be advised, while these items will help lessen risk, they are NOT a guarantee that the malware will not morph into something else that penetrates a network.

  • Do not click on links, even if they appear legitimate; hover over them first to check where they actually lead.
  • Do not click on links sent via email, even if they are sent from friends.
  • Do not open attachments.
  • Deactivate all Wi-Fi equipment [tablets, cell phones, etc.] wherever possible.
  • Stay vigilant and propagate best practices to colleagues.

Please know that Computer Integrated Services is doing all we can to protect you.  We are enlisting all possible avenues to do our due diligence and keep you safe.  As information and mitigation procedures become available we will keep you informed.  At this point, nobody can state with 100% assurance, even with these best practices, that you will not be affected.

If you have any questions regarding this issue, please feel free to contact us.  We thank you for your business and your trust.


Navigating the 23 NYCRR 500 Financial Regulations w/ CIS & BeyondTrust

March 10th, 2017

The New York State Department of Financial Services (DFS) has released regulation 23 NYCRR 500 to combat the persistent threat posed to information and financial systems by nation-states and independent criminal actors.  This regulation is designed to:

  • Promote the protection of customer information
  • Promote the protection of information technology systems of regulated entities
  • Require each company to assess its specific risk profile
  • Require each company to design a program that addresses its risks in a robust fashion
  • Require annual certification confirming compliance with these regulations by senior management

CIS and BeyondTrust have been monitoring the requirements of this new regulation, and recommend that anyone in the financial industry read the attachment below as a first step in implementing a plan.

To discuss further, please contact a CIS rep at:  sales@cisus.com or 212-577-6033

NY State Financial Cyber Security Requirements