Well, I don’t worry for a few reasons:
a) they’re computers that require programmers.
b) any soul-like (or soul-less) features would have to be programmed in by those programmers.
c) society doesn’t want soulless machines making moral decisions.
d) therefore, people (legislators, the programmers’ bosses, whoever) will make sure that some sort of morality is built into the AI.
That’s why I don’t worry about it. The people nervous about exactly what you’re describing will make sure it doesn’t happen.