The Software & Information Industry Association (SIIA) joins seven other organizations in expressing opposition to SB 1047 (Wiener). The bill would enact the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which, among other provisions, requires frontier AI developers to determine the safety of covered models before initiating training. While we share the goal of ensuring safe AI development, we believe this issue is best addressed at the federal level.

SB 1047 would further complicate the already fragmented AI regulatory landscape in the U.S., creating inconsistencies with federal regulations and imposing vague and impractical requirements on developers. By regulating the underlying AI technology rather than its high-risk applications, and by leaving key definitions and compliance obligations uncertain, the bill would harm economic and technological innovation. We are also concerned by the bill's unrealistic expectation that developers certify model safety before training, its ambiguous definitions of hazardous capabilities, and its intrusive requirements on operators of computing clusters. We likewise consider the establishment of a new regulatory body with broad jurisdiction and the imposition of harsh penalties to be excessive.

Federal solutions are needed to ensure nationwide consistency in AI regulation and to align with ongoing efforts by federal agencies such as the National Institute of Standards and Technology. Ultimately, we believe that SB 1047 would hinder AI innovation in California and advocate a national approach to AI regulation instead. We therefore oppose the bill and urge its reconsideration.