Yeah, that's a possibility, but given how we've seen other infiltration techniques evolve over the years, I don't have much confidence that security AI bots couldn't also be tricked.
That's also one of those things that sounds easy enough but gets very complicated once you think it through. Start from a strict principle like "the only secure system is one that isn't on the internet" and your AI security bot won't let you build anything useful. So you loosen that requirement, and then, well, which modules are approved for use and known to be secure and safe? None, because they're only as secure as we currently believe them to be, until someone finds a hole. So you end up with a mushy definition, and I think as you work through coding projects the security features would just have to be dialed down until you get a functioning product.
I don't think this is necessarily a "never" idea, but I'm not sure how reasonable or effective it would be in our current IT world, which any honest security expert will tell you is a minefield of disasters waiting to explode.