Anthropic’s Account Suspension of OpenClaw Creator Sparks Debate on AI Platform Control

Peter Steinberger, the developer behind the open-source tool OpenClaw, found his Anthropic account suspended early Friday morning. He shared a screenshot on X showing a message from Anthropic citing “suspicious” activity as the reason for the ban. “Yeah folks, it’s gonna be harder in the future to ensure OpenClaw still works with Anthropic models,” Steinberger posted alongside the image. The suspension was short-lived, with his account reinstated a few hours later after the post gained widespread attention.

Among the flood of responses to Steinberger’s post, an Anthropic engineer chimed in to clarify the company’s stance. The engineer stated that Anthropic has never banned anyone for using OpenClaw and offered assistance. It remains unclear whether this intervention directly led to the account restoration. Anthropic has not provided further details on the matter.

This incident occurred shortly after Anthropic announced a significant pricing change. Last week, the company declared that subscriptions to its Claude AI model would no longer cover usage through “third-party harnesses including OpenClaw.” Users of such tools must now pay separately, based on consumption, via Claude’s API. Anthropic explained that subscription plans were not designed to accommodate the “usage patterns” associated with these harnesses, which can be more compute-intensive due to features like continuous reasoning loops, automated task retries, and integration with external systems.

Steinberger expressed frustration with the new policy, saying he was adhering to the updated rules by using the API but was banned regardless. He suggested a strategic motive behind the changes, posting, “Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source.” Though he did not elaborate, the comment likely refers to features recently added to Anthropic’s own agent, Cowork, such as Claude Dispatch, which enables remote agent control and task assignment. Dispatch launched a couple of weeks before the OpenClaw pricing revision.

The discussion on X broadened into industry dynamics, particularly given Steinberger’s employment at OpenAI, a direct competitor to Anthropic. One user remarked, “You had the choice, but you went to the wrong one,” implying his choice of employer had influenced the suspension. Steinberger retorted, “One welcomed me, one sent legal threats,” alluding to past tensions with Anthropic.

When questioned about his continued use of Claude despite working for OpenAI, Steinberger clarified that his usage is strictly for testing purposes. He aims to ensure OpenClaw updates remain compatible with Claude, as it remains a popular model among OpenClaw users. “You need to separate two things,” he explained. “My work at the OpenClaw Foundation where we wanna make OpenClaw work great for *any* model provider, and my job at OpenAI to help them with future product strategy.”

Others noted that this reliance on Claude for testing underscores its prevalence in the OpenClaw community, where it is often preferred over alternatives like ChatGPT. Asked how OpenClaw would adapt to Anthropic’s pricing changes, Steinberger said simply, “Working on that,” a possible nod to his product-strategy role at OpenAI.

Steinberger did not respond to requests for additional comment. The episode highlights the growing friction around integrating third-party tools with proprietary AI platforms, where pricing shifts and account policies can put developers and service providers at odds.
