As previous comments make clear, "good idea" is very general--it may be a good idea from the point of view of cost-effectiveness or reducing military casualties, but a bad idea from other points of view: misdirected fire, perhaps, or muddy lines of responsibility for lethal actions, or international equality (and hence, maybe, stability? Might 'non-robotized' nations, feeling unable to compete on the battlefield, opt for other forms of conflict, like terrorism or the development of weapons of mass destruction?).
That said, I think that AI is really, really hard, and though there have been some remarkable breakthroughs--the cyber-Jeopardy champion is an amazing example of AI coping pretty well with natural language, which has been a really tough challenge--it's likely that robotic soldiers will be remotely operated for quite some time to come.
There the change has come very quickly; the US is said to deploy something like 7,000 UAVs (remotely piloted planes) these days. And they are being rapidly up-armed, becoming deadlier fast. If Qaddafi had had those this week, things might have been different: some of his fighter pilots, ordered to bomb Libyan oil fields controlled by the opposition, defected to Malta instead.
Had they been jockeying UAVs, he'd have shot them as soon as their disobedience became clear.