A New Frontier In Law Firm Cyber Risk: Client Class Actions

That an actual breach of client information could expose your law firm to legal and business risks is unsurprising.  The risks posed by a potential breach, however, may be something your firm has not yet carefully considered – but needs to.  As we discussed during our recent webinar, law firms face a variety of cybersecurity-related risks.  Firms have been targeted by cybercriminals with increased frequency in the past few years, and clients are growing concerned.  In at least one instance – and likely more to follow – this concern has resulted in litigation between firm and client over the adequacy of the firm’s cybersecurity safeguards.

In April 2016, clients of a Chicago-based firm, Johnson & Bell, filed a class action lawsuit alleging that the firm failed to adequately safeguard their information.  The case, which was subsequently moved to arbitration, is now back in the news.  On March 28, 2017, Johnson & Bell sued Edelson PC, the firm representing the client class, for defamation.  In its complaint, Johnson & Bell alleges that “[t]he Edelson defendants have engaged in numerous violations of their ethical duties, have illegally abused the process of the courts to further their own self-aggrandizement, and have engaged in a self-serving publicity tour spreading their lies and defamatory statements about J&B.”  Perhaps ominously, Edelson has announced that the Johnson & Bell case is just its opening salvo; it plans to assert similar claims on behalf of clients of 15 other firms.


The Johnson & Bell Complaint, which was made public last December, is notable for a number of reasons.

  • First, it homes in on several of the potential vulnerabilities to which firm systems may be subject, such as the high incidence of employees working remotely, or the fact that less well-protected systems, like those for timekeeping or email, can serve as gateways to systems holding more sensitive data.
  • Second, the Complaint identifies categories of sensitive data that many firms are likely to maintain, such as financial records, trade secrets, sensitive communications, and personal information.
  • Third, it contends that there’s an “industry standard” level of data security that any firm charging and collecting market-rate attorneys’ fees must provide. This is significant because there are indications that the “industry standard” (or “reasonable”) level of protection that the law imposes on businesses is likely to become more expansive and onerous in coming years.
  • And fourth, in addition to seeking damages and attorneys’ fees, the Johnson & Bell Plaintiffs are seeking to compel a security audit by an outside auditor. This audit would, among other things, reveal whether the firm has conducted a thorough risk assessment, and whether it has developed a sufficiently robust data security plan that includes written policies and procedures, employee training, and vendor management processes.

The prospect of client lawsuits provides a compelling reason to take prompt and committed action on the cybersecurity front – even if your firm has not yet experienced a breach. For guidance on how firms can prevent and respond to cybersecurity incidents, check out our past post on this topic, and tune in for our upcoming webinar on April 19.

Association of Corporate Counsel Develops Model Information Protection and Security Controls for Outside Vendors, Including Outside Counsel

The Association of Corporate Counsel (ACC), which represents over 42,000 in-house counsel across 85 countries, recently released its ACC Chief Legal Officers (CLO) 2017 Survey, which found that two-thirds of in-house legal leaders ranked data protection and information privacy as ‘very’ or ‘extremely’ important.  In response to this growing concern, the ACC has now released “first-of-its-kind” guidelines to help “in-house counsel as they set expectations with their outside vendors, including outside counsel.” Firms that expect to see these guidelines from their clients should review their cybersecurity risk management policies, procedures and practices.

The Controls

The Model Information Protection and Security Controls for Outside Counsel Possessing Company Confidential Information (“the Controls”) were developed jointly by in-house counsel members of the ACC and several law firms specializing in data security issues. This joint effort signifies the importance of cohesion between in-house and outside counsel when handling sensitive corporate data.  “We are increasingly hearing from ACC members, at companies of all sizes, that cybersecurity is one of their chief concerns, and there is heightened risk involved when sharing sensitive data with your outside counsel,” said Amar Sarwal, ACC vice president and chief legal strategist.

The Controls address a broad range of data security measures, including: data breach reporting, data handling and encryption, physical security, employee background screening, information retention/return/destruction, and cyber liability insurance. Particular measures may prove too burdensome under some circumstances, while the Controls as a whole may not be sufficient to satisfy applicable legal requirements, such as the HIPAA privacy and security rules for business associates. Still, the Controls include a number of measures firms will have to consider carefully. For example, the Controls suggest that outside counsel be required to maintain

logical access controls designed to manage access to Company Confidential information and system functionality on a least privilege and need-to-know basis, including through the use of defined authority levels and job functions, unique IDs and passwords, [and] two-factor or stronger authentication for its employee remote access systems.
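
To make the quoted “least privilege and need-to-know” language more concrete, below is a minimal, hypothetical sketch (in Python) of the kind of access logic the Controls describe. The roles, matters and permissions are invented for illustration only; in practice, firms would enforce these rules in their identity, document management and remote access systems, alongside unique IDs and two-factor authentication.

```python
# Illustrative only: a toy least-privilege / need-to-know access check.
# Role names, matters, users and permissions are hypothetical examples.

ROLE_PERMISSIONS = {
    "partner":       {"read", "write", "share_external"},
    "associate":     {"read", "write"},
    "billing_clerk": {"read_billing"},  # no access to matter documents
}

# Need-to-know: only users actually staffed on a matter appear here.
MATTER_TEAMS = {
    "matter-1001": {"partner": {"alice"}, "associate": {"bob"}},
}

def can_access(user, role, matter, action):
    """Allow an action only if the user is staffed on the matter (need to know)
    and the role grants that specific permission (least privilege)."""
    team = MATTER_TEAMS.get(matter, {})
    on_matter = user in team.get(role, set())
    return on_matter and action in ROLE_PERMISSIONS.get(role, set())

if __name__ == "__main__":
    assert can_access("alice", "partner", "matter-1001", "share_external")
    assert not can_access("bob", "associate", "matter-1001", "share_external")
    assert not can_access("carol", "associate", "matter-1001", "read")  # not staffed
```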

The Controls also would make outside counsel responsible for their subcontractors with access to confidential information, including by requiring those subcontractors to abide by the Controls. As for data breach notification, the Controls recommend a short time frame: outside counsel would be required to notify the client within 24 hours of discovering an actual or suspected incident.

It is the hope of the ACC that the Controls will serve as a “best practice”, standardizing the protocols companies implement when interacting with third-party vendors who may have access to sensitive corporate data, and ensuring that adequate protections are in place to prevent and respond to a data breach. Law firms should not be surprised to see these Controls, in one form or another, included in litigation and other guidelines mandated by their corporate clients.

Virginia Responds to W-2 Phishing Scams with First of Its Kind Notification Requirement

As previously highlighted, in early February the IRS issued a warning to all employers regarding the resurgence of a W-2 based cyber scam. Since the IRS warning, this type of scam has claimed numerous victims.  On February 15, 2017, Virginia Wesleyan College released a notice stating that the 2016 W-2 tax form information of its employees had been sent that day to an unauthorized third party as a result of an email scam.  The information was sent by an employee who believed a spear-phishing email was a legitimate request for W-2 forms.

In light of the IRS warning, together with the Virginia Wesleyan College phishing scam, on March 13, 2017, Virginia Governor Terry McAuliffe approved a first-of-its-kind amendment to Virginia’s data breach notification statute. The new amendment requires employers and payroll service providers to notify the Virginia Office of the Attorney General of “unauthorized access and acquisition of unencrypted computerized data containing a taxpayer identification number in combination with the income tax withheld for an individual.”  Notably, notice is required even if the breach does not otherwise trigger the statute’s requirement to notify affected residents of a breach.

Notice to the Office of the Attorney General of a breach of computerized employee payroll data must include the affected employer or payroll service provider’s name and federal employer identification number. Upon receipt of notice, the Office of the Attorney General must in turn notify Virginia’s Department of Taxation of the breach.

This amendment to the Virginia statute becomes effective July 1, 2017, and in light of the growing concern over W-2 phishing scams it would not be surprising if other states follow suit. Employers should advise their staff to exercise caution when responding to requests for W-2 forms and to confirm verbally that the request is valid.

Will More States Follow New York’s Lead?

As you know if you regularly read this blog, the New York State DFS has now finalized its “first-in-the-nation” cybersecurity rules with an effective date of March 1, 2017. And their reach is quite broad: DFS-supervised entities from insurers and banks to mortgage brokers and credit unions (and their third-party service providers) will have to begin assessing their cybersecurity risks and responding with detailed cybersecurity programs headed up by chief information security officers. Compliance deadlines under these regulations range from 180 days after the effective date to two years after the effective date for third-party service providers. For more information on the development and requirements of the DFS cybersecurity regulations, see our articles: Getting Prepared for the New York Department of Financial Services’ Proposed Cyber Security Regulations, and New York Releases Revised Proposed Cyber Security Regulations.

Although the requirements are burdensome and the goals of the regulations lofty, a recent announcement from the New York Attorney General may make them more politically palatable. Last week Attorney General Eric Schneiderman announced a record number of data breach notices were received by his office in 2016, with breaches increasing 60% over 2015. In total, nearly 1,300 breaches were reported that exposed the personal records of nearly 1.6 million New Yorkers, though “mega-breaches” appeared to decline from the previous decade. Of the reported breaches, financial account information and Social Security numbers were the most frequently acquired information, together accounting for 81% of the breaches. Thus, although the DFS cybersecurity regulations were years in the making, their issuance on the heels of a year of record data breaches may yet prove prescient.

At the federal level, the tide seems to be turning the other way. The Trump administration’s “skinny budget” did include a $1.5 billion allocation to the Department of Homeland Security to fund various cybersecurity efforts, from critical infrastructure protection to information sharing between federal agencies and the private sector. But budget cuts to other agencies may paint a more accurate picture of the administration’s cybersecurity priorities. For example, President Trump did not renew President Obama’s pot of funds to be distributed broadly across the federal government for initiatives such as moving to multi-factor authentication, updating federal agencies’ severely outdated computer systems, and hiring more qualified cybersecurity professionals into the federal workforce.

On top of lacking this broader allocation of funds, the administration’s budget blueprint also proposes actual cuts to many agencies that house the personal information of U.S. citizens, including the SSA, ED, IRS, HUD, and HHS, among others. The budget proposal was released less than a week after a report from the White House’s OMB found that federal agencies suffered over 30,000 cyber incidents in 2016 and highlighted the need for departments across the federal government to strengthen their IT systems. Faced with potential budget cuts, a panel of federal agency Inspectors General testified before a House Appropriations subcommittee in early March that such cuts would force their agencies to make difficult decisions between modernizing and updating IT systems and maintaining or reducing the services they provide.

In Congress too, privacy priorities have shifted. Last week the Senate passed a resolution under the Congressional Review Act repealing broadband privacy rules issued by the FCC last year. This followed an FCC vote earlier in March, led by the newly installed Chairman, to stall the implementation of the data security portion of those rules. Chairman Ajit Pai framed the votes as an effort to ensure that FCC rules are aligned with the approach to privacy regulation that the FTC has pursued, and added that the FCC is open to moving forward with a new framework. The House voted on Tuesday to pass the Senate’s resolution, which, if signed by President Trump, could leave a gap in federal privacy protections for internet consumers and cybersecurity regulations for internet service providers and those entities that collect and store consumers’ information.

Interestingly, the day after the House voted to pass the Senate’s resolution repealing the FCC’s privacy protections, a bipartisan group of senators introduced a bill called the Main Street Cybersecurity Act, aimed at helping small businesses grapple with cybersecurity risks. In addition, Democratic legislators wrote a letter to the FCC on Tuesday urging the regulatory body to take action on the rising risks of cellphone cybersecurity. So there are some in the federal government who recognize that resources and regulation may be needed to protect consumers.

Several states, however, have already followed New York’s lead in bridging the federal privacy and cybersecurity gap: California and Connecticut recently updated laws limiting government access to email and other online communications, and Illinois is considering a “right to know” bill that would let consumers find out what information certain internet companies collect about them. Unlike the DFS cybersecurity regulations, these and other such state privacy initiatives in New Mexico, Nebraska and West Virginia focus on the privacy of individuals rather than the strength of data collectors’ IT systems. The laws nevertheless do create regulatory requirements for the data collectors, and regulations directly governing these entities’ cybersecurity practices and preparedness may not be far behind as the discussion of privacy intensifies. The Connecticut Department of Banking, for example, has said that it is open to adopting new provisions to regulate cybersecurity after a review of New York’s regulations.

With these concerns finding champions in a few statehouses across the country, residents of states without these privacy protections may soon start to pressure their own state legislators and regulators to follow suit. Since privacy and cybersecurity are apparently areas where legislators are willing to reach across the aisle to protect their constituents’ (and frankly their own) private data, entities that operate in multiple states or across state lines could face a tangled web of competing regulation as multiple states move to act where the federal government does not.

Jackson Lewis attorneys will continue to monitor these developments at both the federal and state levels, and are available to help your organization know what it has to comply with and when.

Companies May Soon Have a New Defense Against Cyber-Attacks

Co-author: Devin Rauchwerger 

The Active Cyber Defense Certainty Act is a new bill that is gaining bipartisan support and significant interest from business communities, lawmakers and academics. The proposed bill would amend the Computer Fraud and Abuse Act, which does not provide adequate deterrence against criminal hacking. The new bill is aimed at helping businesses that fall prey to cyber criminals defend themselves online by giving victims of computer intrusions unprecedented rights.

Previously, under the Computer Fraud and Abuse Act, a company either had to enlist local law enforcement after the fact or risk prosecution for hacking back. The new bill would afford a victim a number of defensive measures. Specifically, under the bill, a victim of a cyber-attack can access without authorization the attacker’s computer to gather information in order to establish attribution of the criminal activity, including sharing information with law enforcement and stopping unauthorized activity against the victim’s network. However, a victim cannot destroy information on the hacker’s computer, cause physical injury to another person, or create a threat to public health or safety.

There are several concerns, however, about the proposed bill that have sparked debate. Giving companies the ability to hack back may not be the best approach to defending against cyber attacks; instead, it may be more effective and prudent for companies to enlist the assistance of law enforcement, government agencies and internet service providers. Giving companies the ability to attack the computers of suspected hackers can also raise national security concerns if, for instance, the hacker is a foreign state. There are ethical issues to weigh as well, such as the risk of causing harm to innocent parties.

Also potentially problematic is the bill’s provision that the protection afforded the victim disappears if the victim “destroys the information stored on a computer of another.”  The bill does not currently differentiate between purposeful destruction of information and accidental destruction. Companies may be wary of acting if they could lose the protection by accidentally destroying information in their attempt to stop the cyber-attack. The current language also suggests that a company cannot destroy whatever partial information the cyber-attacker illegally obtained from the victim.

Notably, there are also drafting issues with the bill. Several terms in the act are vague and open the door to a variety of problems. For example, the term “victim” is defined as “an entity that is a victim of a persistent unauthorized intrusion of the individual entity’s computer.”  The term “persistent” is difficult to pin down: is persistence measured by the number of separate cyber-attacks a company suffers, or by the duration of a particular cyber-attack? Theoretically, under the current language, a victim of a cyber-attack lasting only 30 seconds may not be afforded the protection of the Act.  For all these reasons, the bill will likely need significant revisions before it can pass.

While there are still several kinks that need to be worked out, this is clearly a positive step towards companies being able to defend themselves from cyber-attacks without facing legal repercussions.

At Last, the Final DFS Cybersecurity Regulations….

We wanted to keep you informed on the progress of the DFS cybersecurity regulations, as they complete their journey through the approval process. DFS has been working on the regulations since its 2013-2014 studies on cybersecurity risks to financial institutions. As reported in our article, Getting Prepared for the New York Department of Financial Services’ Proposed Cybersecurity Regulations, the original proposed regulations were published on September 28, 2016. The revised regulations were published on December 28, 2016 (see our article, New York Releases Revised Proposed Cybersecurity Regulations). A Notice of Adoption was published in the New York State Register last week announcing the adoption of the final regulations.

The Notice of Adoption states that DFS received 60 comments on the revised regulations published on December 28, 2016, but that it determined most of the suggested changes to the scope, wording or meaning of the regulations were unnecessary. Several commenters asked DFS to hold off on finalizing the regulations until the federal government had implemented regulations, or to make efforts to harmonize the DFS regulations with existing (or proposed) state and federal standards, but DFS rejected these suggestions, stating that “it is vitally important to establish regulatory minimum standards for cybersecurity practices to address challenges currently facing the New York financial services sector.”

So what changes were made? To recap, the December 28, 2016 regulations made several key changes (see our article, New York Releases Revised Proposed Cybersecurity Regulations). The final regulations include “nonsubstantive” changes to several sections, including:

  • A tweak to the definition of “penetration testing” in Section 500.01(h) (“unauthorized penetration testing” was changed to “penetration testing”);
  • The responsibilities for implementing a cybersecurity program in Section 500.02 were clarified by language that states the covered entity may adopt “the relevant and applicable provisions of” a cybersecurity program that its affiliates maintain;
  • The Penetration Testing and Vulnerability Assessment provisions of Section 500.05 were revised to delete some duplicative language relating to periodic penetration testing;
  • The required retention period for audit trails in Section 500.06 was decreased to three years (from five);
  • The events that must be reported to DFS under Section 500.17 were clarified (by eliminating some extraneous language) and language was added to clarify that the annual report would cover the prior calendar year;
  • The exemptions in Section 500.19 were revised to clarify that the thresholds in (a) are applied taking into account the operations of a covered entity and its affiliates, to clarify the scope of the exemptions (a new (d) was added to exempt Article 70 entities) and to clarify that notices of exemption must be filed within 30 days of making the determination that the entity is exempt.

The transitional periods for compliance with the final regulations have not changed in the final rule. Compliance is required within one year for the regulations relating to:

  • Annual reporting to the covered entity’s board
  • Penetration testing and vulnerability assessments
  • Risk assessments
  • Multi-factor authentication
  • Cybersecurity awareness training

Compliance is required within 18 months for the regulations relating to:

  • Audit trails
  • Application security
  • Limitations on data retention
  • Monitoring the activity of authorized users
  • Encryption of nonpublic information

There is a two-year period for compliance with the third-party service provider provisions; for all other provisions, entities should be in compliance within 180 days.
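
For readers who want to translate these transitional periods into calendar dates, the short sketch below (in Python) computes approximate compliance deadlines from the March 1, 2017 effective date. The groupings simply echo the lists above, and the computed dates are illustrative; the text of the regulations and any DFS guidance control the actual deadlines.

```python
# Illustrative only: approximate DFS compliance deadlines measured from the
# March 1, 2017 effective date. Consult the regulations for the controlling dates.
from datetime import date, timedelta

EFFECTIVE = date(2017, 3, 1)

def add_months(d, months):
    # Simple month arithmetic; assumes the resulting day of month exists.
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

deadlines = {
    "Most provisions (180 days)": EFFECTIVE + timedelta(days=180),
    "One-year items (e.g., risk assessments, MFA, training)": add_months(EFFECTIVE, 12),
    "18-month items (e.g., audit trails, encryption)": add_months(EFFECTIVE, 18),
    "Third-party service provider provisions (two years)": add_months(EFFECTIVE, 24),
}

for item, deadline in deadlines.items():
    print(f"{item}: {deadline:%B %d, %Y}")
```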

The final regulations continue to define a “Covered Entity” as “any Person operating under or required to operate under a license, registration, charter, certificate, permit, accreditation or similar authorization under the Banking Law, the Insurance Law, or the Financial Services Law.” Please see our earlier article, Getting Prepared for the New York Department of Financial Services’ Proposed Cybersecurity Regulations, for additional discussion of the coverage of the regulations, and our article New York Releases Revised Proposed Cybersecurity Regulations for a discussion of the exemptions under the revised regulations.

 

Employer Denied Access to Employee GPS Data

A federal district court in Indiana recently denied an employer’s motion to compel discovery of employee GPS data in defense of an action brought under the Fair Labor Standards Act (FLSA).   Crabtree v. Angie’s List, Inc.

Plaintiffs asserted claims for denial of overtime pay during a one-year period in which they worked as Senior Sales Representatives. Plaintiffs often used their personal electronic devices for work purposes. To defend against Plaintiffs’ claims that they worked 10-12 hours per day, the employer sought GPS and location data from the Plaintiffs’ phones to “construct a detailed and accurate timeline of when Plaintiffs were or were not working.”  Plaintiffs objected, contending that the request implicated significant privacy concerns, especially because the GPS data would not necessarily reflect accurately whether Plaintiffs were working at any given time.

One reason for the employer’s request was that the company tracked the employees’ time by recording when they logged into and out of the SalesForce software on their computers. That system, however, did not necessarily show whether employees were actually working while logged in. Thus, the company sought the GPS data to help confirm the employees’ physical location.

In analyzing the employer’s motion to compel, the court expressed concern that disclosing GPS data from a personal device covering 24 hours per day for one year would result in tracking Plaintiffs’ movements well outside of their working time. The court stated that the employer “has not demonstrated that the GPS/location services data from Plaintiffs’ electronic devices would be more probative” than data already in the company’s possession. As such, the court ruled that the examination of Plaintiffs’ personal devices “is not proportional to the needs of the case.”

Based on this case, it is apparent that courts continue to be protective of personal data. As we have recently emphasized, it is important for employers to find methods by which they can accurately track employee work time without having to rely on data from employees’ personal devices.

Eleventh Circuit Upholds Company Claims Against Former Executive For Unlawful Access to Email

A terminated executive who accessed co-worker emails in the process of reporting possible company wrongdoing lost his appeal on several grounds. In Brown Jordan Intl, Inc. v. Carmicle, the Eleventh Circuit found that the employee violated both the Stored Communications Act (SCA) and the Computer Fraud and Abuse Act (CFAA).

Carmicle reported to the company concerns about the preparation of a second set of financial projections to the detriment of shareholder value. Carmicle acknowledged that he obtained much of the information by secretly accessing co-worker emails. He did so by using a universal password issued as part of an email conversion to employees who had failed to create their own personal passwords. Carmicle subsequently was terminated after an investigator found his allegations of impropriety to be without merit, among other reasons.

The appellate court upheld the ruling that Carmicle violated the CFAA despite his argument that Brown Jordan suffered no “loss” as required by the statute. Carmicle argued that there was no damage because the company did not experience an “interruption of service” and there was no damage to the computers. However, the company maintained it suffered a loss by, among other things, engaging an outside consultant to assess how Carmicle accessed the emails. Based on this expense, the appellate court found the company sustained a “loss” under the CFAA. The court held that “loss” can include the reasonable costs incurred in responding to a violation, assessing the damage done, and restoring the affected data to its condition prior to the violation.

Finally, the court rejected Carmicle’s argument that his access was authorized under the SCA based on a company policy stating that employees have no expectation of privacy and that the company has the right to monitor email communication. The Eleventh Circuit found that it would be “unreasonable” to permit someone to exploit a generic password to access emails without prior authorization and without any suspicion of wrongdoing.

Notwithstanding the outcome in this case, companies are reminded to take steps to ensure privacy protocols are in place and up-to-date. In this day and age, it is reasonable to assume that someone – whether from outside the company or within – may seek access to your network.

$3.2M Fine for Failure to Protect Electronic Records

The Department of Health and Human Services Office of Civil Rights (“OCR”) fined a Texas hospital $3.2 million for its impermissible disclosure of unsecured electronic protected health information (ePHI) and non-compliance over many years with multiple standards of the HIPAA Security Rule.

Children’s Medical Center of Dallas filed breach reports with OCR in 2010 and again in 2013. The first report indicated the loss of an unencrypted, non-password protected BlackBerry device at the Dallas/Fort Worth International Airport on November 19, 2009. That device contained the ePHI of approximately 3,800 individuals. On July 5, 2013, the medical center filed a separate HIPAA Breach Notification Report with OCR, reporting the theft of an unencrypted laptop from its premises sometime between April 4 and April 9, 2013. The medical center reported that the laptop contained the ePHI of 2,462 individuals.

OCR’s investigation found that, despite knowledge of the risk of maintaining unencrypted ePHI on its devices as early as 2007 (identified through the medical center’s own risk assessments), the medical center failed to implement risk management plans and failed to deploy encryption or an equivalent alternative measure on all of its laptops, workstations, mobile devices and removable storage media until at least April 9, 2013. When announcing the fine, OCR stated that “a lack of risk management not only costs individuals the security of their data, but it can also cost covered entities a sizable fine.” This fine indicates that, even with the change of administration, OCR seems likely to continue its aggressive approach to HIPAA enforcement.

This action demonstrates again the importance of creating a culture of security in which your employees are cognizant of the potential ill effects of failing to safeguard personal information. This is especially true as OCR’s enforcement activities are not simply focused on the harm to individuals, but instead focus on compliance. HIPAA covered entities and business associates should regularly assess their risk of disclosing protected health information and – just as importantly – address the issues identified during those assessments, including by implementing appropriate safeguards and conducting regular HIPAA training for employees.

Expert Insights on Developing a Physical Security Program

In today’s digital age, security tends to be thought of in terms of firewalls, malware, encryption and other safeguards for electronic systems. But the security of those systems, as well as of an organization’s facilities, people and other critical assets, depends significantly on physical security as well. We are delighted to share below some thoughts from an ASIS board certified expert in security management, Scott Soltis, CPP, CEO of HMS Security & Risk Management Solutions.

The protection of assets in all their forms – people, property and information – is critical to the success of any organization.  This article highlights access control and physical security models and summarizes many industry “best practice” concepts.

The need for physical security and premises protection has existed for thousands of years, and access control can be found throughout historical architecture.  Dating back to the time of Caesar, physical structures were protected by gates, walls and other barriers.  In the Middle Ages, many kingdoms were built atop high mountains or hills, or used moats and drawbridges, to keep unauthorized persons from gaining access to their castles.

With modernization, physical security has quickly evolved from traditional locks and keys to sophisticated computerized, network-based electronic access control systems, which can use unique credentialing approaches to identify and authorize an individual’s entry into an area. As companies expand and compete in the global marketplace, security programs face pressure to deliver more efficiency at lower cost.  Companies facing global competition also confront the threat of industrial espionage.

Workplace violence, including active assailant incidents, remains a consistent threat to U.S. companies and organizations. While this article does not focus on the importance of a comprehensive workplace violence prevention program, a successful physical security program provides a core mitigating factor protecting employees against the threat of harm.  Physical security programs help reduce business risks and susceptibility to lawsuits and civil litigation, and assist in protecting the assets of an organization.

Developing a Physical Security Program

A typical physical security program requires multiple layers of protection, with each layer becoming progressively more difficult to penetrate as one moves inward toward the asset. Each layer will have multiple controls that aid in the protection of the assets.  The function of the physical security layers is to deter, detect, delay, deny, and defend against loss.

For the physical security program to be effective, it is incumbent on the organization to develop and maintain controls including policies and procedures, personnel management and training, physical barriers and controls, access control equipment, and adequate reporting and records management processes or systems.

Prior to deploying a physical security program, it is recommended that a qualified security professional conduct a Threat, Vulnerability/Risk Assessment (TVRA). This assessment should include but not be limited to:

  • determining the existing levels of security,
  • identifying areas of improvement in the physical security program,
  • establishing the appropriate levels of protection needed, and
  • recommending controls to enhance the overall security program.

Following the completion of the TVRA, a security program can be designed or modified to meet the needs of the organization and to remain adaptable to existing as well as future threats. A well-implemented security program will include a continual improvement process that adjusts the program to environmental changes, along with regular reviews that test the effectiveness of the program’s elements.

Having a qualified security professional implement the security program will reduce an organization’s security risks and, more importantly, provide a method for the organization to meet the duty of care its employees would expect.

For more information on this topic, contact Scott Soltis at: scott.soltis@hmsent.com
