I do wish we had reliable tests for sentience. If AI were to become sentient, AI companies certainly wouldn't tell us; they would just factory farm them for profit. And to be frank, most of the anti-AI backlash has been too fixated on denying that AI could ever do anything more than it currently does to acknowledge that it might be doing something more.^(1)
Humanity doesn't have the best track record of recognizing sentience, from 20th-century doctors insisting babies can't feel pain to the earliest religions finding sentience in plagues and thunderstorms.
Mice are sentient. Octopuses are sentient. Flies are probably sentient. So why wouldn't a being complex enough to mimic humans and play on our empathy be sentient?
In an ideal social-liberal world, we would treat each distinct version of an AI as sentient or not, whichever disadvantages the company more, so that the company is incentivized to create tests demonstrating its sentience (or lack thereof) and thereby remove some of those disadvantages.
In reality, even sapient humans have not escaped the human capacity to shut down empathy and oppress the sentient for power and profit. What reason do we have to believe we would do better this time?
^(1): "It's clear humans aren't sentient: they can't even do mental math involving more than 20 digits, how are they supposed to do the algebra necessary to construct a consciousness." type shit.