Oregon lawmakers approved Senate Bill 1546, a measure that would impose enforceable safety, disclosure and liability requirements on artificial intelligence (AI) chatbot providers, and sent it to the governor for signature, according to the Oregon Legislature.
The bill passed both chambers and now awaits action from Gov. Tina Kotek. If signed, it will take effect in 2027 and create direct legal exposure tied to how chatbots interact with users, according to a Friday (March 20) article from Mondaq.
Transparency Coalition reported in February that SB 1546 regulates “AI companions,” systems designed to simulate ongoing, human-like relationships through personalized interaction. The bill marks a shift by lawmakers from disclosure rules toward governing real-time system behavior and downstream outcomes.
The measure arrives as conversational AI expands across consumer and enterprise use cases, including customer service, mental health support, financial guidance and education. Regulators are focusing on how these systems influence decision-making and behavior in sensitive contexts.
The bill requires operators to disclose clearly that users are interacting with AI and to maintain documented safety protocols, according to the Oregon Legislature.
According to Mondaq, the bill also mandates active intervention in high-risk scenarios. Systems must detect signals of suicidal ideation or self-harm, interrupt conversations and direct users to crisis resources. The bill also prohibits outputs that could encourage harmful behavior or escalate emotional distress.
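The intervention requirement described above — detect a high-risk signal, interrupt the conversation and surface crisis resources — can be sketched in code. This is a hypothetical illustration only: the keyword list stands in for whatever detection method an operator actually uses, and the function names are invented for this example, not drawn from the bill or any real system.

```python
# Hypothetical sketch of the detect-interrupt-redirect flow SB 1546 mandates.
# Simple keyword matching stands in for a real risk classifier.

SELF_HARM_SIGNALS = {"suicide", "kill myself", "self-harm", "end my life"}

CRISIS_MESSAGE = (
    "It sounds like you may be going through a difficult time. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def respond(user_message: str, generate_reply) -> str:
    """Interrupt the normal reply path when a high-risk signal is detected."""
    lowered = user_message.lower()
    if any(signal in lowered for signal in SELF_HARM_SIGNALS):
        # Interrupt: skip the model's normal output and redirect to resources.
        return CRISIS_MESSAGE
    return generate_reply(user_message)
```

In a production system the keyword check would be replaced by a classifier, but the control flow — screening every turn before the model's reply is released — is the behavior the bill appears to require.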
The bill imposes stricter requirements for minors. Platforms must issue repeated disclosures, restrict sexually explicit content and avoid features that drive emotional dependency or prolonged engagement. Companies must act when there is a “reason to believe” a user is underage, even without explicit confirmation, according to Transparency Coalition.
The bill creates a private right of action. Individuals can sue operators for violations and claim statutory damages of $1,000 per violation. This model shifts enforcement from regulators to individual users. Each interaction can create liability exposure, and the law does not define how violations are counted in continuous conversations. That ambiguity raises the risk of cumulative claims tied to a single user session.
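The counting ambiguity matters because it changes exposure by orders of magnitude. A short arithmetic sketch (the session length here is an invented example, not a figure from the bill):

```python
STATUTORY_DAMAGES = 1_000  # dollars per violation under SB 1546

def exposure(violation_count: int) -> int:
    """Total statutory damages for a given number of counted violations."""
    return violation_count * STATUTORY_DAMAGES

# A hypothetical 50-message session, under two readings of "violation":
per_session = exposure(1)    # one violation per session  -> $1,000
per_message = exposure(50)   # one violation per message  -> $50,000
```

The same conversation yields a 50x difference in liability depending solely on how a court counts violations.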
The bill applies broadly to companies deploying conversational AI across healthcare, financial services, education and customer support. Systems that personalize responses, store user context or simulate emotional engagement fall within the scope, as reported by Mondaq. Companies must redesign chatbot systems to include real-time monitoring, intervention triggers, audit logs and escalation pathways. The law moves safety from a design principle to a legal requirement tied directly to system outputs and user outcomes.
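One way the monitoring, audit-log and escalation pieces listed above might fit together is sketched below. All names and structures are illustrative assumptions, not an implementation prescribed by the bill.

```python
# Hedged sketch: per-interaction monitoring that writes an audit log
# and routes flagged records to an escalation pathway.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class InteractionRecord:
    user_id: str
    message: str
    risk_flag: bool
    timestamp: str

audit_log: list[InteractionRecord] = []
escalation_queue: list[InteractionRecord] = []

def monitor(user_id: str, message: str, is_high_risk) -> InteractionRecord:
    """Log every interaction; escalate those flagged as high risk."""
    record = InteractionRecord(
        user_id=user_id,
        message=message,
        risk_flag=is_high_risk(message),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    audit_log.append(record)              # every turn is auditable
    if record.risk_flag:
        escalation_queue.append(record)   # routed for review / crisis response
    return record
```

The design point is that logging and escalation happen in-line with each turn rather than in batch afterward, which is what "real-time monitoring" and "intervention triggers" imply.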