FTC Lays Out Case For Punishing “Data Abuses”
Mandatory cyber incident reporting bill formally introduced; CNIL presses companies to comply with cookie law; Two U.S. agencies chart path on AI
This week, the Federal Trade Commission (FTC) held its sixth PrivacyCon, at which a commissioner and a key advisor to the FTC’s chair proposed a new approach to how the agency polices privacy violations. This new course would reorient the FTC from going after entities that violate people’s privacy and data security to looking at the broader personal data ecosystem. In doing so, the FTC would seek stiffer penalties in settlements and court cases, including deletion of all personal data and algorithms related to the violations and disgorgement of profits associated with the violations.
Commissioner Rebecca Kelly Slaughter sketched a new, ambitious course for how the agency polices privacy under its existing powers. It must be stressed that what she proposes does not require Congress to add to the agency’s powers, but it would necessitate a conceptual shift in how the FTC uses its powers, which would undoubtedly be challenged in court if the agency started punishing entities for “data abuse.” Having said that, under Section 5 of the FTC Act, the agency has immense latitude:
Unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce, are hereby declared unlawful.
Kelly Slaughter began:
First, I know we are here today for “Privacy”Con, but I would like to challenge everyone to reject “privacy” as the animating framework for the important issues being discussed at today’s conference and among thought leaders generally with respect to our data-driven economy. Today’s agenda addresses algorithmic bias, issues around consent, misinformation during the pandemic, and special concerns related to kids and teens, as well as more conventional privacy concepts. The FTC has done work on all these fronts and has recently issued guidance on algorithmic bias and unfairness, and worked to reevaluate outdated and deceptive consent frameworks around dark patterns. These issues go way beyond “privacy” as it is traditionally conceived.
Kelly Slaughter explained why the FTC, policymakers, stakeholders, and the public should reorient their thinking and move beyond what has traditionally been considered privacy:
§ The broad agenda of this PrivacyCon reflects a growing understanding that the data issues with which both the Commission and society at large are concerned have moved past the narrow framework of who has access to your personal data. This emerging understanding is why I prefer the term “data abuses” to the narrower language of “privacy.”
§ Words matter, and “data abuses” reflects the fact that rampant corporate data collection, sharing, and exploitation harms consumers, workers, and competition in ways that go well beyond more traditional or libertarian privacy concerns. We must examine a wide variety of data abuses, including questions of racial bias, civil rights, and economic exclusion, considering practices that undermine personal autonomy and dignity, and reevaluating damaging and dangerous business models and market practices. In addition to examining these practices, we need to consider what to do about the problems we find in the market.
It appears that Kelly Slaughter is urging stakeholders to look beyond immediate concerns about privacy (e.g., Apple, Google, and many, many others track my every move through my phone) to the downstream effects of data collection and processing. She is arguing that while the former category of harm is significant, the latter is far more pernicious, for “data abuse” impinges on one’s civil rights, employment, housing, and financial opportunities, and, in many cases, one’s very self. Kelly Slaughter is implicitly arguing that data abuse threatens democracy and fundamental rights. Moreover, she presumably is suggesting that “data abuses” can be policed under Section 5 of the FTC Act, the proscription against unfair and deceptive practices.
And the second challenge I would like to issue today is the following: Can we move away from the outdated notice-and-consent model to govern questions surrounding personal data, and instead turn our focus to the underlying business structures and incentives that are anchored in indiscriminate collection and application of personal data to fuel data-driven business models such as behavioral advertising? It is this underlying incentive structure that has caused so many of the harms and privacy risks we’re here to discuss today.
Kelly Slaughter explained why she thinks changing the focus on how to address privacy from notice and consent to data minimization would be more effective:
Rather than focusing on opt-in versus opt-out, and whether privacy policies are clear enough, I believe we should be discussing the concept of data minimization, a principle that would ensure companies can collect only the information necessary to provide consumers with the service on offer, and use the data they collect only to provide that service. That minimization could be coupled with further use, purpose, sharing, and security requirements to ensure that the information companies can permissibly collect isn’t used to build tools or services that imperil people’s civil rights, economic opportunities, or personal autonomy. Corporate self-dealing is also a serious problem in the data ecosystem and as long as key digital markets are controlled by just a few large, data-hungry online platforms, both consumers and prospective entrants are at their mercy.
Kelly Slaughter may be viewing the collection and processing of personal data as the sort of problem individuals cannot effectively protect themselves from or change (e.g., global warming, pollution, and other collective action problems). She is likely alluding to the number of choices everyone must make daily about which services, products, and websites to use. On many websites, one must also accept or decline cookie settings and, depending on the jurisdiction, may be able to exercise certain rights, meaning still more choices and more action. And all these choices are made in realms where dark patterns and the flouting of laws and codes reign. Therefore, if privacy and personal data are indeed issues that one’s actions cannot meaningfully affect, it makes sense for governments to step in and impose limits on the entities amassing everyone’s data.
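The data-minimization principle Kelly Slaughter describes — collect only what the service on offer requires, and use it only for that service — can be illustrated with a short sketch. The purpose and field names below are hypothetical, not drawn from any FTC guidance:

```python
# Illustrative sketch of data minimization: a service declares, per purpose,
# the only fields it may collect, and everything else is discarded before it
# enters the pipeline. Purpose and field names are hypothetical.

ALLOWED_FIELDS = {
    "ship_order": {"name", "street_address", "city", "postal_code"},
    "email_receipt": {"name", "email"},
}

def minimize(purpose: str, submitted: dict) -> dict:
    """Keep only the fields necessary for the declared purpose."""
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        # No declared purpose means no collection at all.
        raise ValueError(f"no collection purpose declared for {purpose!r}")
    return {k: v for k, v in submitted.items() if k in allowed}

form = {
    "name": "A. Customer",
    "email": "a@example.com",
    "street_address": "1 Main St",
    "city": "Springfield",
    "postal_code": "00000",
    "birthdate": "1990-01-01",  # not needed for either declared purpose
}

shipping_record = minimize("ship_order", form)  # birthdate never stored
```

The design choice here mirrors her point: the constraint sits in the business logic itself rather than in a notice-and-consent banner the user must parse.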
And, according to Kelly Slaughter, she is not the only member of the FTC concerned about “data abuse.” She argued that there is “renewed energy” at the agency that has created space for “meaningful changes”:
The Commission has a shared concern about many of these practices and I’ve heard the call from members of the public at our two open meetings for us to take decisive action against these abuses. This moment of renewed energy at the FTC offers a window of time to catalyze meaningful changes in the markets and ensure that the data economy actually works for people, not just the largest corporate players.
Kelly Slaughter was careful to highlight that data collection and processing is also a competition issue as the Facebooks and Googles are sitting on troves of data that give them huge advantages:
[O]f course, unchecked data collection is not just a consumer protection issue. It is also a competition issue, as the enormous amounts of data incumbents have collected gives them a profound advantage when competing against new entrants or seeking to enter new product markets themselves. We absolutely must look at these issues holistically, rather than myopically viewing them through the lens of either competition or consumer protection.
She may be suggesting that the FTC look into the data abuse practices from an antitrust angle, which would be a new approach regarding some of the largest technology companies.
Kelly Slaughter urged her colleagues at the FTC to move beyond the case-by-case common law approach that has characterized the agency’s enforcement of its statutory mandate. She may be proposing policy statements or rulemakings (the latter of which is very hard for the FTC):
I believe that the FTC has an obligation to use all the tools in its toolbox to address these issues. Simply challenging the application of abusive data practices on a case-by-case basis isn’t likely to bring about the systemic change we need to see in the market.
Recently appointed FTC Chief Technologist Erie Meyer echoed Kelly Slaughter. Meyer was named to her position by Chair Lina Khan, and it is unlikely Meyer would publicly endorse a position in scripted remarks at odds with Khan’s. Consequently, a fair reading of Meyer’s remarks is that Khan is in agreement. She said:
Before we kick off today's event, I want to share a few places where the market should expect changes and how the FTC will approach its work when it comes to protecting the public from the misuse and abuse of data. The approach is not through a narrow lens of consumer protection. Data abuses don't happen in a vacuum. They're fed by incentives. Among them beating out competitors. So with that broader view, you can expect key changes in our work. We're going to make sure that data abusers face consequences for the wrongdoing and provide real help for affected individuals.
Obviously, Meyer is echoing Kelly Slaughter’s argument that the FTC should dispense with a privacy focus regarding personal data collection and processing and move instead to a more holistic model that encompasses so-called “data abuses.” What is more, Meyer is calling for new types of punishments to combat data abuses. One recent example that springs to mind is a settlement the FTC reached with Everalbum, Inc.:
As part of the proposed settlement, Everalbum, Inc. must obtain consumers’ express consent before using facial recognition technology on their photos and videos. The proposed order also requires the company to delete models and algorithms it developed by using the photos and videos uploaded by its users.
This sort of approach has been percolating in the minds of policymakers at the FTC. In an October 2020 speech, Kelly Slaughter advocated for new approaches to personal data cases, including disgorgement:
In some instances, our data privacy orders lack remedies that would directly help consumer victims. If monetary relief is not possible, consumers should still receive direct notice of the law violation, its possible impact, and any mitigation options available. If refunds and notice are both impossible, the Commission should employ creative approaches to mitigate consumer harm through admissions of liability, requiring opt-in regimes for existing customers, funding of education campaigns, disgorgement of data, or other creative solutions that might vary case by case.
Consequently, it is safe to assume the three Democratic FTC Commissioners will seek settlements for data abuses that require disgorgement of data and of algorithms developed through the collection, processing, and use of such data. The FTC could also push entities to switch to an opt-in regime. In this vein, Meyer signaled the agency may be shifting away from pursuing only large fines and one-time admissions or disclosures of information as a means of changing practices prospectively:
When a firm breaks the law or worse, breaks the law over and over and over, regulators like the Federal Trade Commission need to design and impose remedies that fix things. Fixing things doesn't mean making a disclosure longer or a one-time fine bigger. It means making sure that the firm cannot and will not benefit from ill-gotten data including against their competitors. It means making sure that the rest of the industry is deterred from engaging in similar wrongdoing. [It] [m]ight mean that we need to look at restructuring business incentives or corporate structure.
Meyer advocated for victims of data abuse getting “actual help,” in part, through orders and settlements making companies delete data and algorithms and disgorge money obtained through data abuses. Of course, on this last point, disgorgement, a recent Supreme Court of the United States decision struck down a commonly used method the agency has wielded against companies to recover ill-gotten funds (i.e., Section 13(b)). Absent a statutory fix (the House just passed the “Consumer Protection and Recovery Act” (H.R. 2668)), the FTC would need to use other powers to seek and obtain equitable monetary relief. Meyer floated the possibility the FTC would seek to treat data abusers the same way it currently treats abusive debt collectors: by banning them from the activity altogether. It is not clear the agency could do the same to data abusers.
Meyer then expanded on the importance of fighting data abuses:
Data abuse is not just an issue of privacy. It's a matter of civil rights and national security. People from communities whose rights and safety are constantly threatened can tell you, this isn't just about someone knowing what you've looked up online. The U.S. Department of Housing and Urban Development charged Facebook with violating the Fair Housing Act. The Department of Justice charged a Zoom executive, alleging that his actions led to people using Zoom's data to track down and intimidate family members of people who used the platform to discuss the Tiananmen Square massacre. There's been a 2,920% increase in reports of identity theft via government benefits this year. What this means, for example, is that a bad actor applies for something like unemployment benefits using personal information gleaned from a data breach from one of these firms.
Meyer further argued that data abuses are systemic and not individual scandals and have real world effects on people. She continued:
A pandemic has sharpened the view of what happened to our country's resilience because of these data disasters. We're moving away from a legalistic approach. This means we'll be approaching investigations with a multidisciplinary lens including privacy engineers and designers, financial analysts and product managers and yes, technologists. This won't happen overnight.
In short, Meyer is spelling out an approach embraced by Commissioner Rohit Chopra who has consistently brought the view of an economist as opposed to a lawyer to the Commission’s work.
§ The United States (U.S.) Department of Commerce’s Bureau of Industry and Security (BIS) announced that it has “added 34 entities to the Entity List for their involvement in, or risk of becoming involved in, activities contrary to the foreign policy and national security interests of the United States.” BIS added that “[o]f these 34 entities, 14 are based in the People’s Republic of China (PRC) and have enabled Beijing’s campaign of repression, mass detention, and high-technology surveillance against Uyghurs, Kazakhs, and members of other Muslim minority groups in the Xinjiang Uyghur Autonomous Region of China (XUAR), where the PRC continues to commit genocide and crimes against humanity.” BIS continued that “Commerce added another five entities directly supporting PRC’s military modernization programs related to lasers and C4ISR programs to the Entity List.” BIS stated:
o As part of this package, Commerce added eight entities for facilitating the export of U.S. items to Iran in violation of the Export Administration Regulations (EAR) or to entities on the U.S. Department of the Treasury’s Office of Foreign Assets Control Specially-Designated Nationals List. Commerce added another six entities for their involvement in the procurement of U.S.-origin electronic components, likely in furtherance of Russian military programs. Additionally, Commerce added one entity to the Military End-User List under the destination of Russia. Finally, Commerce removed one entity from the Unverified List, as a conforming change to this same entity being added to the Entity List for being involved in proliferation to unsafeguarded nuclear activities.
o The Entity List is a tool utilized by BIS to restrict the export, reexport, and transfer (in-country) of items subject to the EAR to persons (individuals, organizations, companies) reasonably believed to be involved, or to pose a significant risk of becoming involved, in activities contrary to the national security or foreign policy interests of the United States. Additional license requirements apply to exports, re-exports, and transfers (in-country) of items subject to the EAR to listed entities, and the availability of most license exceptions is limited.
§ The Commission nationale de l'informatique et des libertés (CNIL) sent out a second round of orders regarding cookies and is threatening to levy fines of 2% of worldwide turnover for failing to comply. In October 2020, CNIL adopted guidelines and recommendations on cookies it is now calling on entities to heed. CNIL explained:
o After the twenty orders sent in May 2021 calling for the recipients to bring themselves into compliance with the law, the CNIL chair gave formal notice to forty other organisations that still do not allow Internet users to refuse or accept cookies as easily. The CNIL will continue to monitor this issue and, if necessary, will take new corrective measures against non-compliant organisations.
o However, some organisations are still not in compliance with the legislation on cookies. This situation is not acceptable.
o Consequently, the CNIL chair has decided to issue new orders to around forty organisations with non-compliant practices. They have until September 6, 2021 to bring themselves into compliance.
o Those affected by these orders include:
§ four major digital economy platforms;
§ six major information technology hardware and software manufacturers;
§ six online consumer goods companies;
§ two major players in online tourism;
§ three car rental companies;
§ three major players in the banking sector;
§ two large local authorities;
§ two online public services;
§ an energy industry participant.
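CNIL’s core requirement in these orders is that refusing cookies must be as easy as accepting them. As a rough sketch, that can be framed as a pre-deployment check against a consent-banner configuration; the banner schema below is hypothetical, not a format CNIL publishes:

```python
# Sketch of a compliance check that a consent banner offers refusal as
# easily as acceptance: both choices on the same layer, one click each.
# The banner dict schema here is hypothetical, for illustration only.

def refusal_is_symmetric(banner: dict) -> bool:
    accept = banner.get("accept_all")
    refuse = banner.get("refuse_all")
    if not accept or not refuse:
        # A banner with no "refuse all" option fails outright.
        return False
    # Both actions must sit on the first layer and take a single click.
    return (accept["layer"] == refuse["layer"] == 1
            and accept["clicks"] == refuse["clicks"] == 1)

compliant = {"accept_all": {"layer": 1, "clicks": 1},
             "refuse_all": {"layer": 1, "clicks": 1}}
dark_pattern = {"accept_all": {"layer": 1, "clicks": 1},
                "refuse_all": {"layer": 2, "clicks": 3}}  # buried in settings
```

Here `refusal_is_symmetric(compliant)` would pass while `refusal_is_symmetric(dark_pattern)` would fail, capturing the asymmetry CNIL is ordering organisations to fix.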
§ Senate Intelligence Committee Chair Mark Warner (D-VA), Ranking Member Marco Rubio (R-FL), and Senator Susan Collins (R-ME) introduced the “Cyber Incident Notification Act of 2021” (S.2407), which tracks closely with the draft bill released in June (see here for more detail and analysis). In their press release, Warner, Rubio, and Collins described it as “bipartisan legislation requiring federal agencies, government contractors, and critical infrastructure owners and operators to report cyber intrusions within 24 hours of their discovery.” They added:
o The legislation is in part a response to the hack of IT management firm SolarWinds, which resulted in the compromise of hundreds of federal agencies and private companies, and the May 2021 ransomware attack on the Colonial Pipeline, which halted pipeline operations temporarily and resulted in fuel shortages along the Atlantic seaboard of the United States, as well as a recent onslaught of ransomware attacks affecting thousands of public and private entities.
o Under existing law, there is currently no federal requirement that individual companies disclose when they have been breached, which experts have noted leaves the nation vulnerable to criminal and state-sponsored hacking activity. The bipartisan Cyber Incident Notification Act of 2021 would require federal government agencies, federal contractors, and critical infrastructure operators to notify the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) when a breach is detected so that the U.S. government can mobilize to protect critical industries across the country. To incentivize this information sharing, the bill would grant limited immunity to companies that come forward to report a breach, and instruct CISA to implement data protection procedures to anonymize personally identifiable information and safeguard privacy.
§ The Department for Digital, Culture, Media & Sport (DCMS) issued an “Online Media Literacy Strategy” because of a commitment made in the “Online Harms White Paper,” and “[t]he aim of the strategy is to educate and empower internet users across the UK to manage their online safety.” DCMS asserted in the strategy:
o Research has found that users generally lack the key skills and knowledge they need to develop strong media literacy capabilities. This can lead to a population at high risk of suffering negative impacts of online harms due to limited understanding about online safety, and a limited ability to navigate the online environment in a safe way.
o The UK already has a rich media literacy sector with over 170 organisations undertaking media literacy activity. However, extensive stakeholder engagement has highlighted key cross-sector challenges which act as barriers to improving user media literacy capabilities.
o The objective of the Online Media Literacy Strategy is to support organisations to undertake media literacy activity in a more coordinated, wide-reaching, and high quality way over the next 3 years.
o We believe that this will subsequently lead to improved media literacy capabilities for users in the UK. The Strategy will support this objective in four ways:
§ setting out a strategic direction for the future of media literacy in the UK
§ ensuring a coordinated approach to media literacy activity
§ addressing key gaps within the media literacy landscape
§ reducing barriers and creating opportunities for organisations undertaking media literacy activity
o We have worked closely with a range of stakeholders and other government departments, such as the Department for Education and the Home Office, to ensure this Strategy complements the existing media literacy landscape.
§ Representatives Tom Malinowski (D-NJ), Katie Porter (D-CA), Joaquin Castro (D-TX), and Anna G. Eshoo (D-CA) issued a statement “on reports that the NSO Group’s sophisticated Pegasus spyware was used by authoritarian regimes against peaceful activists and journalists around the world.” They argued:
o The United States government and our allies often partner with private companies to develop and provide to our national security agencies sensitive technologies. But we would never tolerate a company that contracts with the Pentagon to develop drone, or missile, or laser technology, and then sells that technology on the open market to governments that might use it against us. If hacking for hire companies continue to exist, clear rules must be established to ensure they only do business with governments in rule of law states.
o To that end, we call on the United States government to urgently:
§ Call out by name publicly and in reports provided to Congress private companies that sell cyber-intrusion tools to governments with a history of misusing them.
§ Consider the immediate addition of the NSO Group and any other company engaged in similar activities to the Entity List administered by the Commerce Department and consider the company’s abusive clients for sanction under the Global Magnitsky Act.
§ Establish by legislation or executive order a sanctions regime to hold accountable individuals and companies that sell these tools to authoritarian states.
§ Ensure that the NSO Group and companies engaged in similar activities do not access American investors’ funds—including through a potential IPO—through SEC regulations that would protect non-securitized capital from funding their activities.
§ Accelerate efforts to finalize accession to the Wassenaar Arrangement’s limited controls on cyber-intrusion tools, lead a multilateral initiative to impose strengthened controls with transparent human rights assessments on items with surveillance capabilities, and consider SEC regulations requiring companies to publicly disclose exports of technologies with surveillance capabilities and to carry out published human rights due diligence for any such exports.
§ Investigate and assess the possible targeting of American ‘journalists, aid works, diplomats and others’ with NSO Group's Pegasus spyware, determine whether America’s national security was harmed, and take steps to protect all Americans, including federal employees, from the threat posed by the growing mercenary spyware industry.
§ The National Institute of Standards and Technology (NIST) published for comment “A Proposal for Identifying and Managing Bias in Artificial Intelligence" (Special Publication 1270), a report that “proposes a strategy for managing AI bias, and describes types of bias that may be found in AI technologies and systems….[and] is intended as a step towards consensus standards and a risk-based framework for trustworthy and responsible AI.” NIST further explained:
o The proliferation of modeling and predictive approaches based on data-driven and machine learning techniques has helped to expose various social biases baked into real-world systems. These biases and other related inaccuracies in automated systems can lead to harmful outcomes that chip away at public trust in technology. The paper proposes an approach for identifying and managing AI bias that is tied to three stages of the AI lifecycle, 1) pre-design, 2) design and development, and 3) deployment (and post-deployment factors). This approach is intended to enable AI designers and deployers to better relate specific lifecycle processes with the types of AI bias, and facilitate more effective management of it. This proposal is part of NIST’s broader work in developing a risk management framework for Trustworthy and Responsible AI.
§ The Department of Energy (DOE) issued Version 2.0 (V2.0) of the Cybersecurity Capability Maturity Model (C2M2), “a tool designed to help companies of all types and sizes evaluate and improve their cybersecurity capabilities,” per the agency’s press statement. The DOE further stated:
o The C2M2 updates address the evolving cyber threat and technology landscape. Today’s release of C2M2 V2.0 advances the Administration’s 100-day plan to confront cyber threats from adversaries who seek to compromise critical systems that are essential to U.S. national and economic security.
o In April 2021, as part of the Biden Administration's effort to safeguard U.S. critical infrastructure, the DOE launched a 100-day coordinated initiative to enhance the cybersecurity of electric utilities’ industrial control systems (ICS).
o The C2M2, which was first released in 2012, is designed to help energy sector organizations understand cyber risks to their information technology (IT) and operational technology (OT) systems and measure the maturity of their cybersecurity capabilities. The updated model reflects inputs from 145 cybersecurity experts representing 77 energy sector organizations. Updates address new technologies like cloud, mobile, and artificial intelligence, and evolving threats such as ransomware and supply chain risks, and ultimately support companies in strengthening their operational resilience.
§ The United States (U.S.) Department of Justice (DOJ) “formally adopted a new policy that restricts the use of compulsory process to obtain information from, or records of, members of the news media acting within the scope of newsgathering activities.” DOJ explained in the memorandum:
o 1. The Department of Justice will no longer use compulsory legal process for the purpose of obtaining information from or records of members of the news media acting within the scope of newsgathering activities, as set out below.
o 2. This new prohibition applies to compulsory legal process issued to reporters directly, to their publishers or employers, and to third-party service providers of any of the foregoing. It extends to the full range of compulsory process covered by the current regulations, specifically, subpoenas, warrants, court orders issued pursuant to 18 U.S.C. § 2703(d) and § 3123, and civil investigative demands. Further, it applies regardless of whether the compulsory legal process seeks testimony, physical documents, telephone toll records, metadata, or digital content.
o 3. As with the current regulations, this prohibition on compulsory process does not apply to obtaining information from or records of a member of the news media who is the subject or target of an investigation when that status is not based on or within the scope of newsgathering activities.
§ “[L]aw enforcement and judicial authorities in Europe, the US and Canada have seized the web domains and server infrastructure of DoubleVPN….a virtual private network (VPN) service which provided a safe haven for cybercriminals to attack their victims” per a Europol press release. Europol added:
o This coordinated takedown, led by the Dutch National Police (Politie), under jurisdiction of the National Public Prosecutor’s Office (Landelijk Parket), with international activity coordinated by Europol and Eurojust, has now ended the availability of this service.
o Servers were seized across the world where DoubleVPN had hosted content, and the web domains were replaced with a law enforcement splash page. This coordinated takedown was carried out in the framework of the European Multidisciplinary Platform Against Criminal Threats (EMPACT).
o DoubleVPN was heavily advertised on both Russian and English-speaking underground cybercrime forums as a means to mask the location and identities of ransomware operators and phishing fraudsters. The service claimed to provide a high level of anonymity by offering single, double, triple and even quadruple VPN-connections to its clients.
§ The Government Accountability Office (GAO) issued a report titled “An Accountability Framework for Federal Agencies and Other Entities” in order “to identify key practices to help ensure accountability and responsible AI use by federal agencies and other entities involved in the design, development, deployment, and continuous monitoring of AI systems.”
o To develop this framework, GAO convened a Comptroller General Forum with AI experts from across the federal government, industry, and nonprofit sectors. It also conducted an extensive literature review and obtained independent validation of key practices from program officials and subject matter experts. In addition, GAO interviewed AI subject matter experts representing industry, state audit associations, nonprofit entities, and other organizations, as well as officials from federal agencies and Offices of Inspector General.
o The GAO pointed to four components of an AI framework:
§ Governance. To help entities promote accountability and responsible use of AI systems, GAO identified key practices for establishing governance structures and processes to manage, operate, and oversee the implementation of these systems.
· Governance at the Organizational Level
· 1.1 Clear goals: Define clear goals and objectives for the AI system to ensure intended outcomes are achieved.
· 1.2 Roles and responsibilities: Define clear roles, responsibilities, and delegation of authority for the AI system to ensure effective operations, timely corrections, and sustained oversight.
· 1.3 Values: Demonstrate a commitment to values and principles established by the entity to foster public trust in responsible use of the AI system.
· 1.4 Workforce: Recruit, develop, and retain personnel with multidisciplinary skills and experiences in design, development, deployment, assessment, and monitoring of AI systems.
· 1.5 Stakeholder involvement: Include diverse perspectives from a community of stakeholders throughout the AI life cycle to mitigate risks.
· 1.6 Risk management: Implement an AI-specific risk management plan to systematically identify, analyze, and mitigate risks.
· Governance at the Systems Level
· 1.7 Specifications: Establish and document technical specifications to ensure the AI system meets its intended purpose.
· 1.8 Compliance: Ensure the AI system complies with relevant laws, regulations, standards, and guidance.
· 1.9 Transparency: Promote transparency by enabling external stakeholders to access information on the design, operation, and limitations of the AI system.
§ Data. To help entities use data that are appropriate for the intended use of each AI system, GAO identified key practices to ensure data are of high quality, reliable, and representative.
· Data used for Model Development
· 2.1 Sources: Document sources and origins of data used to develop the models
underpinning the AI system.
· 2.2 Reliability: Assess reliability of data used to develop the models.
· 2.3 Categorization: Assess attributes used to categorize data.
· 2.4 Variable selection: Assess data variables used in the AI component models.
· 2.5 Enhancement: Assess the use of synthetic, imputed, and/or augmented data.
· Data Used for System Operation
· 2.6 Dependency: Assess interconnectivities and dependencies of data streams that
operationalize the AI system.
· 2.7 Bias: Assess reliability, quality, and representativeness of all the data used in the
system’s operation, including any potential biases, inequities, and other societal
concerns associated with the AI system’s data.
· 2.8 Security and privacy: Assess data security and privacy for the AI system.
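Practices 2.2 and 2.7 can be made concrete with a small check that compares a dataset's group shares against a trusted reference distribution. The sketch below is illustrative only and not part of GAO's framework; the `region` attribute, the reference shares, and the tolerance are all hypothetical.

```python
from collections import Counter

def representativeness_gaps(records, attribute, reference_shares, tolerance=0.05):
    """Compare each group's share of the dataset against a reference
    distribution (e.g., census figures) and flag gaps beyond a tolerance.
    All names here are illustrative, not prescribed by the GAO framework."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            # Positive gap: over-represented; negative: under-represented.
            gaps[group] = round(observed - expected, 3)
    return gaps

# Example: training records skewed toward one region.
records = [{"region": "north"}] * 80 + [{"region": "south"}] * 20
print(representativeness_gaps(records, "region", {"north": 0.5, "south": 0.5}))
```

A check like this only surfaces gaps; deciding whether a gap is an acceptable artifact or a bias requiring correction remains a judgment the framework assigns to the entity.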
§ Performance. To help entities ensure AI systems produce results that are consistent with program objectives, GAO identified key practices for ensuring that systems meet their intended objectives.
· Performance at the Component Level
· 3.1 Documentation: Catalog model and non-model components, along with operating
specifications and parameters.
· 3.2 Metrics: Define performance metrics that are precise, consistent, and reproducible.
· 3.3 Assessment: Assess the performance of each component against defined metrics to ensure it functions as intended and is consistent with program goals and objectives.
· 3.4 Outputs: Assess whether outputs of each component are appropriate for the operational context of the AI system.
· Performance at the System Level
· 3.5 Documentation: Document the methods for assessment, performance metrics, and
outcomes of the AI system to provide transparency over its performance.
· 3.6 Metrics: Define performance metrics that are precise, consistent, and reproducible.
· 3.7 Assessment: Assess performance against defined metrics to ensure the AI system functions as intended and is sufficiently robust.
· 3.8 Bias: Identify potential biases, inequities, and other societal concerns resulting from the AI system.
· 3.9 Human supervision: Define and develop procedures for human supervision of the AI system to ensure accountability.
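The call in practices 3.2 and 3.6 for metrics that are precise, consistent, and reproducible is easiest to satisfy when the metric definition itself is written down explicitly rather than left implicit in a library default. A minimal sketch, assuming binary classification labels; the function name and toy data are hypothetical.

```python
def classification_metrics(y_true, y_pred):
    """Compute precision and recall from first principles so the metric
    definition is explicit and reproducible. Binary (0/1) labels assumed."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall}

# On this toy data: tp=2, fp=1, fn=1, so both metrics are 2/3.
print(classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
```

Pinning down the definition this way also makes the component-level assessments in 3.3 comparable across re-runs and across teams.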
§ Monitoring. To help entities ensure reliability and relevance of AI systems over time, GAO identified key practices for monitoring performance and assessing sustainment and expanded use.
· Continuous Monitoring of Performance
· 4.1 Planning: Develop plans for continuous or routine monitoring of the AI system to ensure it performs as intended.
· 4.2 Drift: Establish the range of data and model drift that is acceptable to ensure the AI system produces desired results.
· 4.3 Traceability: Document results of monitoring activities and any corrective actions taken to promote traceability and transparency.
· Assessing Sustainment and Expanded Use
· 4.4 Ongoing assessment: Assess the utility of the AI system to ensure its relevance to the current context.
· 4.5 Scaling: Identify conditions, if any, under which the AI system may be scaled or expanded beyond its current use.
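Practice 4.2 asks entities to establish an acceptable range of drift in advance. One minimal way to operationalize that is to flag when a monitored feature's mean moves too far, in baseline standard deviations, from its training-time value. The threshold and data below are purely illustrative; GAO leaves the acceptable range to each entity.

```python
import statistics

def drift_exceeds_threshold(baseline, current, max_shift=0.1):
    """Flag when the mean of a monitored feature has moved more than
    max_shift baseline standard deviations from training-time values.
    The 0.1 default is an illustrative placeholder, not a GAO figure."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline) or 1.0  # guard against zero spread
    shift = abs(statistics.mean(current) - base_mean) / base_std
    return shift > max_shift, round(shift, 3)

# A recent window well outside the training-time range trips the check.
flag, shift = drift_exceeds_threshold([10, 11, 9, 10, 12, 10], [13, 14, 12, 13])
print(flag, shift)  # → True 2.582
```

Logging each such result, and any retraining it triggers, is what practice 4.3 means by traceability.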
o In an increasingly online world, the concept of privacy must be broadened to protect an individual's privacy both online and offline. The Uniform Personal Data Protection Act provides a reasonable level of consumer protection without incurring the compliance and regulatory costs associated with some existing state regimes. The Act recognizes that the collection and use of personal data are important features of our modern economy but raise significant issues of privacy and control. The Act outlines compatible, incompatible, and prohibited data practices and provides an enforcement mechanism to ensure compliance with the Act.
§ “Instagram makes under-16s' accounts private by default” — BBC. Instagram has made new under-16s' accounts private by default so only approved followers can see posts and "like" or comment. Tests showed only one in five opted for a public account when the private setting was the default, it said. And existing account holders would be sent a notification "highlighting the benefits" of switching to private. But Instagram also said it was pushing ahead with new apps for under-13s, despite a backlash from some groups.
§ “Majority of Covid misinformation came from 12 people, report finds” By Erum Salam — The Guardian. The vast majority of Covid-19 anti-vaccine misinformation and conspiracy theories originated from just 12 people, a report by the Center for Countering Digital Hate (CCDH) cited by the White House this week found. CCDH, a UK/US non-profit and non-governmental organization, found in March that these 12 online personalities they dubbed the “disinformation dozen” have a combined following of 59 million people across multiple social media platforms, with Facebook having the largest impact. CCDH analyzed 812,000 Facebook posts and tweets and found 65% came from the disinformation dozen. Vivek Murthy, US surgeon general, and Joe Biden focused on misinformation around vaccines this week as a driving force of the virus spreading.
§ “Tech Firms to Buy Covid-19 Vaccines on Behalf of Taiwan’s Government” By Joyu Wang and Chao Deng — The Wall Street Journal. Taiwan is struggling with the pandemic, short on vaccines and locked in a geopolitical fight with China over access to the shot BioNTech SE co-developed with Pfizer Inc. Now, in a twist, two of the world’s most important technology players—who also happen to be Taiwan’s two best-known homegrown companies—are stepping in to buy millions of BioNTech doses on behalf of the Taiwanese government. BioNTech and Shanghai Fosun Pharmaceutical Co., which is licensed to distribute BioNTech’s vaccine in China, said in a joint statement Sunday that they would sell 10 million shots to Taiwan Semiconductor Manufacturing Co., Hon Hai Precision Industry Co.—the iPhone assembler better known as Foxconn—and a private charity controlled by Foxconn founder Terry Gou.
§ “U.S. and E.U. security officials wary of NSO links to Israeli intelligence” By Shane Harris and Souad Mekhennet — The Washington Post. The Israeli company NSO Group has earned a reputation among national security experts around the world as a best-in-class manufacturer of surveillance technology capable of secretly gathering information from a target’s phone. But U.S. and European security officials regard the company with a degree of suspicion despite the ability of its technology to help combat terrorists and violent criminals. In interviews, several current and former officials said they presumed that the company, which was founded by former Israeli intelligence officers, provides at least some information to the government in Jerusalem about who is using its spying products and what information they’re collecting.
§ “How Washington power brokers gained from NSO’s spyware ambitions” By Drew Harwell — The Washington Post. The Israeli surveillance giant NSO Group and companies linked to it or its founders have spent millions of dollars in hopes of wooing their way into the U.S. market, hosting demonstrations for government intelligence officials and hiring Washington’s most prominent names despite pledges that its phone-hacking tool can’t be used inside the United States. The company’s attempts to secure U.S. contracts appear to have been unsuccessful, with federal and local law enforcement agency representatives saying in emails and interviews that they balked at its Pegasus spyware tool’s million-dollar price tag.
§ “A princess raced to escape Dubai’s powerful ruler. Then her phone appeared on the list.” By Drew Harwell — The Washington Post. The princess had been careful, so she left her phone in the cafe’s bathroom. She’d seen what her father could do to women who tried to escape. She hid in the trunk of a black Audi Q7, then jumped into a Jeep Wrangler as her getaway crew raced that morning from the glittering skyscrapers of Dubai to the rough waves of the Arabian Sea. They launched a dinghy from a beach in neighboring Oman, then, 16 miles out, switched to water scooters. By sunset they’d reached their idling yacht, the Nostromo, and began sailing toward the Sri Lankan coast.
§ “How Mexico’s traditional political espionage went high-tech” By Mary Beth Sheridan — The Washington Post. In 2017, investigators discovered traces of Pegasus spyware on the phones of several Mexican journalists and civic activists. The government acknowledged it had used Pegasus — but only, officials said, to fight criminals. Amid the backlash, the Justice Ministry stopped using the surveillance tool. Four years later, Pegasus has become the most prominent symbol of an explosion of high-tech political spying in Mexico. And yet the mystery around its use has only deepened. The Justice Ministry told a government watchdog agency in 2019 that it had uninstalled the spyware licensed by the Israeli-based NSO Group — but it had no records of how or when, or what happened to any data collected. Mexican federal prosecutors are investigating alleged abuse of the hacking tool.
§ “The spyware is sold to governments to fight terrorism. In India, it was used to hack journalists and others.” By Joanna Slater and Niha Masih — The Washington Post. A powerful surveillance tool licensed only to governments was used to infiltrate mobile phones belonging to at least seven people in India and was active on some of their devices as recently as this month. The hacks — confirmed by forensic analysis of the phones — represent a tiny fraction of what may be a vast surveillance net, intensifying concerns about the erosion of civil liberties in India under Prime Minister Narendra Modi.
§ “Indian activists jailed on terrorism charges were on list with surveillance targets” By Joanna Slater and Niha Masih — The Washington Post. When the Indian authorities began arresting lawyers and human rights activists in 2018, Sudha Bharadwaj did what she had done for more than three decades wherever she saw injustice. She organized. She spoke out. She asked courts to uphold the law. Later that year, the police arrested her, too. Unknown to Bharadwaj, a lawyer and trade unionist, her phone number was already on a list that included some selected for surveillance by clients of NSO Group, an Israeli firm. So was the phone of a lawyer representing her. So was the phone of a close friend and human rights lawyer, and later the phone of another lawyer Bharadwaj worked with in a civil liberties group.
§ “In Orban’s Hungary, spyware was used to monitor journalists and others who might challenge the government” By Michael Birnbaum, Andras Petho, and Jean-Baptiste Chastand — The Washington Post. In communist-era Hungary, citizens were recruited to spy on their neighbors and report any potential threats to the secret police. In the Hungary of Prime Minister Viktor Orban, a spyware tool has been deployed to similar effect, monitoring people with technology that can turn smartphones into troves of information. More than 300 Hungarian phone numbers — connected to journalists, lawyers, business titans and activists, among others — appeared on a list that included numbers selected for surveillance by clients of NSO Group, an Israeli security company.
§ “Edward Snowden calls for spyware trade ban amid Pegasus revelations” By David Pegg and Paul Lewis — The Guardian. Governments must impose a global moratorium on the international spyware trade or face a world in which no mobile phone is safe from state-sponsored hackers, Edward Snowden has warned in the wake of revelations about the clients of NSO Group. Snowden, who in 2013 blew the whistle on the secret mass surveillance programmes of the US National Security Agency, described for-profit malware developers as “an industry that should not exist”.
§ 5 August
o The Federal Communications Commission (FCC) will hold its monthly open meeting with this tentative agenda:
§ Establishing Two New Innovation Zones. The Commission will consider a Public Notice that would create two new Innovation Zones for Program Experimental Licenses and the expansion of an existing Innovation Zone. (ET Docket No. 19-257)
§ Numbering Policies for Modern Communications. The Commission will consider a Further Notice of Proposed Rulemaking to update the Commission’s rules regarding direct access to numbers by interconnected Voice over Internet Protocol providers to safeguard the nation’s finite numbering resources, curb illegal robocalls, protect national security, and further promote public safety. (WC Docket Nos. 13-97, 07-243, 20-67; IB Docket No. 16-155)
§ Appeals of the STIR/SHAKEN Governance Authority Token Revocation Decisions. The Commission will consider a Report and Order that would establish a process for the Commission to review decisions of the private STIR/SHAKEN Governance Authority that would have the effect of placing voice service providers out of compliance with the Commission’s STIR/SHAKEN implementation rules. (WC Docket Nos. 17-97, 21-291)
§ Modernizing Telecommunications Relay Service (TRS) Compensation. The Commission will consider a Notice of Proposed Rulemaking on TRS Fund compensation methodology for IP Relay service. (CG Docket No. 03-123; RM-11820)
§ Updating Outmoded Political Programming and Record-Keeping Rules. The Commission will consider a Notice of Proposed Rulemaking to update outmoded political programming rules. (MB Docket No. 21-293)
§ Review of the Commission’s Part 95 Personal Radio Services Rules. The Commission will consider a Memorandum Opinion and Order on Reconsideration that would grant three petitions for reconsideration of the Commission’s May 2017 Part 95 Personal Radio Services Rules Report and Order. (WT Docket No. 10-119)
§ 1 September
o The House Armed Services Committee will mark up the FY 2022 National Defense Authorization Act (H.R.4395).