The US firm fears that its tool could tell someone how to make chemical or radioactive weapons and is seeking a specialist to ensure that its guardrails are robust.
In its LinkedIn recruitment post, Anthropic says that applicants should have a minimum of five years' experience in "chemical weapons and/or explosives defence" as well as knowledge of "radiological dispersal devices" - also known as dirty bombs.
The company told the BBC that the role is similar to other jobs that it has already created.
Anthropic is not the only firm deploying this strategy, as a similar vacancy has been advertised by the ChatGPT developer OpenAI.
Its careers website features a vacancy for a researcher in "biological and chemical risks" with a salary of up to $455,000, almost twice the amount offered by Anthropic.
Some experts have expressed concern about this approach, fearing that it gives AI tools access to information about weapons even when they have been instructed not to use it.
Dr Stephanie Hare, tech researcher and co-presenter of the BBC's AI Decoded TV programme, said: "Is it ever safe to use AI systems to handle sensitive chemicals and explosives information, including dirty bombs and other radiological weapons?
"There is no international treaty or other regulation for this type of work and the use of AI with these types of weapons. All of this is happening out of sight."
The AI industry has frequently warned about the existential threats posed by the tech, but few attempts have been made to slow its progress.
The issue has become increasingly important as the US government has called on AI firms for support while carrying out military strikes on Iran.