artificial intelligence

Massachusetts lawmakers approve $4B for major initiatives in life sciences, climatetech, and AI

On Nov. 14, the Massachusetts House and Senate approved a compromise $4 billion economic development measure after months of negotiations. The deal followed the end of the legislature’s formal session this past summer and the spring release of Gov. Maura Healey’s $3.5 billion proposal, An Act relative to strengthening Massachusetts’ economic leadership, or the Mass Leads Act. The Mass Leads Act sought to reauthorize the state’s life sciences investments at $1 billion over the next decade, launch a separate $1 billion, 10-year climatetech initiative, and build on the momentum of the state’s CHIPS + Science wins by proposing targeted investments in advanced manufacturing and robotics. It also included $100 million to create an Applied AI Hub in Massachusetts.

White House memo aims to kickstart AI, particularly in areas of national security

A new White House national security memo (NSM) builds on last year’s Executive Order on AI and calls for the U.S. government to act quickly to adopt AI capabilities in service of national security. It also specifies actions to improve the security and diversity of chip supply chains, among other directives.

Artificial intelligence and the US labor market

Artificial intelligence (AI) is already well integrated into the American workforce: in 2022, 19% of American workers were in jobs identified as most exposed to AI, compared with 23% in the least exposed jobs, according to a study by Pew Research. Jobs identified as most exposed are those in which the most critical responsibilities can either be replaced or assisted by AI; the least exposed jobs currently cannot. A recent study identified U.S. cities at risk of losing jobs to AI, finding more than 10 million at-risk jobs within those cities.

Biden Administration releases executive order regarding future of AI in the US including specific directions for DOE, NSF, DOC and SBA

The Biden Administration issued an executive order earlier this week that provides guidance on the safe, secure, and trustworthy development and use of Artificial Intelligence (AI) in the U.S. The EO includes guidance for agencies to work to provide new opportunities for small businesses and entrepreneurs in AI and other directives.

How State Policymakers and Governors Are Shaping AI

In the absence of cohesive federal policies or regulations involving the growing development and use of artificial intelligence (AI), states’ governors and lawmakers are undertaking studies and crafting legislation that seeks to balance governance and implementation of this evolving technology. The studies and legislation are intended to protect constituents from AI’s possible harms without hindering potential uses or contributions of AI to government services or medical, science, business, and educational advancements.

White House R&D priorities include new focus on regional innovation; other priorities slightly shift

A memo sent last week by the Office of Management and Budget and the Office of Science and Technology Policy outlines this year’s R&D priorities. Federal science agencies will use the memo to design their budget requests for fiscal year 2025.

AI giants pledge to ensure the technology’s safety, security, and trustworthiness

Representatives from leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) gathered at the White House on July 21 for the announcement of their voluntary commitment to “help move toward safe, secure, and transparent development of AI technology.” According to a White House statement, the companies have committed to ensuring products are safe before introducing them to the public, building systems that put security first, and earning the public’s trust.

NIH puts the kibosh on generative AI

Last month, NIH issued a policy statement that prohibits using generative AI to analyze or critique NIH grant applications and contract proposals. Specifically, as written in NIH Notice NOT-OD-23-149, “NIH prohibits NIH scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals.” The problem with using generative AI in peer review is that it compromises confidentiality. As expressed in the notice, once information is loaded onto a generative AI platform, “AI tools have no guarantee of where data are being sent, saved, viewed, or used in the future, and thus NIH is revising its Confidentiality Agreements for Peer Reviewers to clarify that reviewers are prohibited from using AI tools in analyzing and critiquing NIH grant applications and R&D contract proposals. Such actions violate NIH’s peer review confidentiality requirements.”

NSF expands its advanced materials network with nine new centers

The National Science Foundation (NSF) is expanding a network of research centers across the country to translate university-based R&D into new and, hopefully, better advanced materials. In late June, NSF announced the distribution of $162 million to support the creation of nine more Materials Research Science & Engineering Centers (MRSECs), bringing the total number of centers to twenty. Each of the new centers will receive $18 million over six years.

Forecast predicts generative AI to make many white-collar workers blue

If a recent forecast from McKinsey & Company is correct, climate change isn’t the only rough ride ahead over the next decade for regional and national economies.
