The use of artificial intelligence (AI) is already omnipresent in many areas of working life, including HR. Nevertheless, German legislators have so far provided hardly any AI-specific regulation in the employment context. Employers in Germany do not, however, operate in a legal vacuum and must comply with various employment (and data protection) rules when using AI. In future, employers will also have to observe the legal framework created by the recently adopted European Union Regulation laying down harmonised rules on artificial intelligence (AI Act).

Continue Reading Artificial Intelligence in German employment law – A status quo and an outlook on the recently adopted EU AI regulation

With the Euros kicking off on 14 June, people all over the UK and Europe are discussing strikers. While most in England are debating whether it should be Ivan Toney or Ollie Watkins as first-choice deputy for Harry Kane, in the employment law world we have been focusing on the strikers at the heart of an important new Supreme Court decision in Secretary of State for Business and Trade v Mercer.

In Mercer, the Supreme Court was asked to consider whether an employee is protected from retaliation if their employer suspends or disciplines them to deter them from going on strike, and whether section 146 of TULRCA 1992 in practice protects employers rather than employees.

Continue Reading Striker! Does UK law adequately protect an employee’s right to strike?

To date, the UK government has adopted a “pro-innovation” approach to AI regulation, refraining from legislation with a view to enabling the UK to keep pace with rapid developments in AI. However, this looks set to change with the recent publication of a first draft Artificial Intelligence (Regulation and Employment Rights) Bill (“the Bill”), potentially marking the starting point for more formal regulation, particularly of AI-driven decision making in the workplace. This blog explores what the Bill proposes by way of regulation and offers some practical tips on what employers can be doing now.

Continue Reading AI in the workplace – is regulation on its way in the UK?

On Monday, June 3, 2024, Attorney General Platkin and Director Sundeep Iyer of the New Jersey Division on Civil Rights (DCR) proposed a new rule (N.J.A.C. 13:16) that would clarify the legal standard and the burdens of proof for claims of disparate impact discrimination under the New Jersey Law Against Discrimination (LAD). 

The proposed standard does not change the legal framework that courts already apply in the employment context under the LAD, but it would resolve any question about the viability of a disparate impact claim and the framework to be applied.

Disparate impact discrimination occurs when a policy or practice that is neutral on its face has a disproportionately negative effect on members of a protected class. Such a policy is unlawful unless the policy or practice is “necessary to achieve a substantial, legitimate, non-discriminatory interest” and there is no “equally effective alternative that would achieve the same interest.”

Continue Reading Attorney General and DCR propose rule to clarify disparate impact discrimination under the New Jersey Law Against Discrimination

The New Jersey Supreme Court’s recent ruling in Savage v. Township of Neptune places limits on the enforceability of non-disparagement clauses in settlement agreements. The court unanimously held that such clauses are unenforceable if they prevent employees from discussing details related to claims of discrimination, retaliation, or harassment, aligning with protections under the New Jersey Law Against Discrimination (LAD).

Christine Savage, a former police sergeant, filed a lawsuit in December 2013 against the Neptune Township Police Department, alleging sexual harassment, sex discrimination, and retaliation. The parties entered into a settlement agreement that included a non-disparagement clause. In 2016, Savage filed another lawsuit against the same defendants, claiming they had continued their discriminatory and retaliatory conduct. This second lawsuit was settled in July 2020, also with a non-disparagement clause in which both parties agreed not to “make any statements … regarding the past behavior of the parties, which statements would tend to disparage or impugn the reputation of any party.”

Continue Reading New Jersey Supreme Court limits use of non-disparagement provisions in New Jersey LAD settlements

On May 6, 2024, the California Supreme Court issued a significant ruling in Naranjo v. Spectrum Security Services, Inc. (Case No. S279397). The decision provides much-needed clarity on California’s wage statement requirements and holds that employers can assert a good faith defense to wage statement claims under appropriate circumstances.

Labor Code section 226 requires California employers to provide employees with accurate itemized wage statements. Employees can seek statutory penalties if an employer fails to provide accurate itemized wage statements and the failure is “knowing and intentional.” (Lab. Code, section 226, subd. (e)(1).) While the statutory penalties are capped at $4,000 per employee (in addition to the employees’ associated attorneys’ fees and costs), aggregated wage statement penalties can add up quickly in the class action context.

Continue Reading Key victory for California employers: California Supreme Court accepts good faith defense to wage statement violations

Shortly after the DOL’s release of guidance on the use of AI in the workplace, a bipartisan working group from the U.S. Senate and the Biden administration released additional guidance on the topic.

Bipartisan Senate AI Working Group’s “road map” for establishing federal AI policies

On May 15, 2024, the Bipartisan Senate AI Working Group released a “road map” for establishing federal AI policies. The road map, titled “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate,” outlines the opportunities and risks involved in AI development and implementation. Most notably, it highlights key policy priorities for AI, such as promoting AI innovation, investing in AI research and development, establishing training programs for AI in the workplace, developing and clarifying AI laws and guidelines, addressing intellectual property and privacy issues raised by AI and creating related protections for those affected, and integrating AI into already-existing laws.

The working group acknowledged that the increased use of AI in the workplace poses the risk of “hurting labor and the workforce” but also emphasized that AI has great potential for positive application. This dichotomy necessitates the advancement of additional “innovation” that will create “ways to minimize those liabilities.”

Continue Reading Senate Working Group and Biden administration guidance on the use of AI in the workplace

On April 24, 2024, the U.S. Department of Labor (DOL) issued guidance on how employers should navigate the use of Artificial Intelligence (AI) in hiring and employment practices. The DOL emphasized that eliminating humans from the processes entirely could result in violation of federal employment laws. Although the guidance was addressed to federal contractors and is not binding, all private employers stand to benefit from pursuing compliance with the evolving expectations concerning use of AI in employment practices.

The guidance was issued by the DOL’s Office of Federal Contract Compliance Programs (OFCCP) in compliance with President Biden’s October 30, 2023 Executive Order 14110, which required the DOL to issue guidance for federal contractors on “nondiscrimination in hiring involving AI and other technology-based hiring systems.”

The guidance was issued in two parts: (1) FAQs regarding the use of AI in the Equal Employment Opportunity (EEO) context, and (2) a list of “Promising Practices” that serve as examples of best practices for mitigating the risks involved with implementing AI in employment practices. In short, the FAQs communicate that established non-discrimination principles apply to the use of AI, and the “Promising Practices” provide specific instruction on how to avoid violations when using AI in employment practices.

Continue Reading DOL’s guidance on use of AI in hiring and employment

On 14 May 2024, the government and financial services regulators published their responses to the recommendations made by the Sexism in the City inquiry. Those hoping that the inquiry would quickly lead to solid commitments for reform to tackle sexism in financial services may be somewhat disappointed. While the inquiry certainly created momentum around the discussion, the current government does not intend to push forward legislative changes, and the two regulators (the Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA)) are still deep in review of their policy direction, although they have set some expectations about their priorities.

In this blog, we look at the background to the Sexism in the City inquiry, the current status in respect of the inquiry’s recommendations, and where this leaves financial services organisations.

What is the Sexism in the City inquiry?

Launched in July 2023, the House of Commons Treasury Committee’s inquiry was intended as a follow-up to the Women in Finance inquiry from 2017. The 2023 inquiry set out to explore progress on issues affecting women in financial services, including the removal of barriers to entry and progression to successful careers, representation at board level, pay gaps, and misogyny and harassment.

After months of reviewing written evidence, hearing oral evidence and holding focus groups, the Committee published its report on Sexism in the City on 5 March 2024. While the report noted some improvement for women in financial services since the 2017 inquiry, particularly on female representation in senior roles, it also expressed disappointment at the lack of progress in tackling non-financial misconduct (e.g. sexual harassment and bullying) against women and at the generally poor culture that continues to create challenges for women in the industry. The inquiry made a number of recommendations to the government and the two regulators to accelerate change.

How have the government and regulators responded to the inquiry’s recommendations?

Two months after the Committee’s report, the responses from HM Treasury, the FCA and the PRA have been published. While there is broad agreement with the Committee’s comments and sentiments about the need for improvement, along with various explanations of steps already taken or currently under way, the government and regulators largely stopped short of committing to prompt and significant changes in line with the recommendations.

Continue Reading What next for women in financial services? The government and regulators respond to recommendations from the Sexism in the City inquiry

Today, the Supreme Court justices ruled unanimously in Smith v. Spizzirri, No. 22-1218, that cases involving arbitrable disputes subject to the Federal Arbitration Act (FAA) must be stayed rather than dismissed outright. As a matter of statutory interpretation, the Court reasoned that the words “shall” and “stay” in Section 3 of the FAA have plain meanings requiring a court to hold a matter pending arbitration, and that dismissal would not satisfy that statutory language. The Court further reasoned that keeping a case alive pending arbitration promotes judicial efficiency, as FAA Sections 5, 7 and 9 prescribe specific supervisory roles for courts.

Employers can expect that cases brought despite arbitration agreements subject to the FAA will persist, albeit on pause pending the results of arbitration. Given the Court’s reliance on specific statutory language, the Court’s opinion likely does not apply to arbitrable disputes pursuant to a collective bargaining agreement, which are governed by the Labor Management Relations Act, a statute that does not have a provision similar to Section 3 of the FAA.