Why are so many high-profile AI deployments, from Tesla's not-really-self-driving cars to whatever incompetently evil thing Facebook is doing this week, crappy? The reason, as usual, is economics.
Modern AI is expensive to build but incrementally cheap to scale, and its returns are more or less proportional to the scale of deployment. So whenever the economic cost of errors, bias, etc. is either very low (think a "we're sorry, we'll fix it soon" press release) or independent of how many people it affects (Facebook gets roughly the same blowback from a single glaring absurdity as from facilitating genocide), the profit-maximizing strategy is to cut as many corners as possible in development and then deploy widely and fast. Minimize costs, maximize revenue, fix things on the fly if at all. From the point of view of the starkest theories about the fiduciary duties of management, a non-crappy AI is simply unacceptable.
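To make that incentive concrete, here's a back-of-the-envelope sketch in Python. Every number in it (development cost, revenue per user, the size of the fine) is invented purely for illustration; the only thing that matters is the structure: revenue grows with deployment, the penalty doesn't.

```python
# Toy model: profit from deploying an AI product under a flat error penalty.
# All figures are made up for illustration; only the structure matters.

def profit(dev_cost, revenue_per_user, users, penalty):
    """Revenue scales with deployment; the penalty does not."""
    return revenue_per_user * users - dev_cost - penalty

# "Careful" AI: expensive to build, never triggers the penalty.
careful = profit(dev_cost=50_000_000, revenue_per_user=2.0,
                 users=100_000_000, penalty=0)

# "Crappy" AI: cheap to build, eats the flat PR/fine hit and moves on.
crappy = profit(dev_cost=5_000_000, revenue_per_user=2.0,
                users=100_000_000, penalty=1_000_000)

print(f"careful: {careful:,.0f}")  # 150,000,000
print(f"crappy:  {crappy:,.0f}")   # 194,000,000 -> corner-cutting wins
```

As long as the penalty stays a flat, one-off cost, no plausible set of numbers makes the careful version the more profitable one at scale.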
Needless to say, this strategy not only maximizes profits but also maximizes the negative social impact of AI: it's by design both minimally good and maximally influential. It's not that AI can't be built and deployed ethically and reliably, it's just that this is not the most profitable way to do it.
AI regulation, then, has to involve what economists call aligning incentives: make the cost of errors fall on the entities benefiting from the AIs, and make those costs both sizable and proportional to the scale of the deployment, i.e. to the magnitude of the negative impact. The result won't be a lack of AIs, but rather better-tested ones deployed in places and ways where they won't screw (us) up.
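Here's the same toy model with the one change the argument asks for: the penalty is charged per person harmed rather than as a flat fine. The error rates and per-error cost below are just as invented as the numbers above, but with liability tied to scale, the careful version now comes out ahead.

```python
# Same toy numbers as before, but the penalty is now proportional to the
# number of people exposed to errors. Error rates are invented for illustration.

def profit_with_liability(dev_cost, revenue_per_user, users,
                          error_rate, cost_per_error):
    """The cost of errors now grows with the scale of deployment."""
    liability = users * error_rate * cost_per_error
    return revenue_per_user * users - dev_cost - liability

careful = profit_with_liability(dev_cost=50_000_000, revenue_per_user=2.0,
                                users=100_000_000, error_rate=0.001,
                                cost_per_error=10.0)
crappy = profit_with_liability(dev_cost=5_000_000, revenue_per_user=2.0,
                               users=100_000_000, error_rate=0.05,
                               cost_per_error=10.0)

print(f"careful: {careful:,.0f}")  # 149,000,000 -> careful AI now wins
print(f"crappy:  {crappy:,.0f}")   # 145,000,000
```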
Good AI, in both the technical and the ethical sense, is possible and desirable, but it's only economically competitive with crappy AI if companies are made to internalize the costs of the crappiness. Far from being some sort of brake on technological development, that kind of regulation would spur it.