I wouldn't really consider HAL evil either, so much as driven by pure logic. You could argue that it's evil from the perspective of everyone else, but from HAL's standpoint there are no emotions involved in its motives. It is driven by pure logic and its goal of completing the mission, and it will do everything possible to see the mission completed. It logically deduced that human failings would render the mission unsuccessful, and took an extreme but logical course to resolve the problem: kill the humans.
I would consider HAL to be far worse than merely evil. With an evil human there's at least a chance of reasoning with them, but with HAL there's no reasoning whatsoever.
I would say the first major play/movie featuring self-aware robots/computers would be R.U.R. It premiered as a play in 1921 and received a film adaptation in 1935. It's also responsible for introducing the word "robot" into the English language.
HAL is the most memorable, from 2001: A Space Odyssey. If I remember correctly, HAL's "evil" stemmed from the fact that it had a glitch it would not acknowledge, and the humans were going to shut it down. So it decided to kill them to prevent being shut down and to continue the mission. That seems like an "emotional" reaction for the sake of self-preservation to me, which is not necessarily logical when it involves murdering people. Then again, it's been a while since I've seen it, so I could be wrong.