embryo wrote:
Essentially, it is about our inability to trust software that isn't made by ourselves. Can we create software that provides enough information for us to be sure it can be trusted?
I don't think it's possible. On one hand, the average user won't be able to make much sense of detailed technical info, so they would depend on someone else, which shifts the question back to trusting people. And ultimately it's people with whom we have trust issues in the first place: it's people who violate other people's trust, either directly or through technology. The concept of trust doesn't exist in the inanimate world. It's only when you add people (or animals) to the world of things that things (pun intended) become less trustworthy than they would be without people.
Also, there's a chance that the hardware can communicate without the user knowing about it. Again, does the user have to trust the entire chain of people involved in the design, manufacturing, and delivery of the computer and its software, or some expert who can examine the assembled system and tell whether or not it's bugged?
On the other hand, if we can't solve even the halting problem (except for very trivial and practically uninteresting cases) and AI still seems something like 50 years away (just as it did 50 years ago), what can we expect from software being able to reason about itself and the hardware it runs on?
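For anyone who hasn't seen why the halting problem is unsolvable in general, here's a minimal Python sketch of the classic diagonalization argument; halts and paradox are illustrative names, not real functions:

# Assume, for contradiction, that a perfect oracle existed:
def halts(program, data):
    """Hypothetical: returns True iff program(data) eventually halts."""
    raise NotImplementedError("no total, always-correct version can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running this program on itself.
    if halts(program, program):
        while True:   # oracle said "halts" -> loop forever
            pass
    return            # oracle said "loops" -> halt immediately

# paradox(paradox) halts exactly when halts(paradox, paradox) says it
# doesn't, so no implementation of halts() can be correct on all inputs.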
The best you can do is cover the basics with some kind of heuristic, which still leaves a chance for error and room for exploitation. Better than nothing, but not perfect, and very bad if something important and/or expensive is at stake.
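In practice such a heuristic often amounts to "run it and give up after a while". A minimal sketch, purely illustrative (the function name and the five-second default are my own, not anything standard):

import multiprocessing

def halts_within(target, args=(), timeout=5.0):
    # Heuristic stand-in for the impossible oracle: run the callable in a
    # separate process and report whether it finished within `timeout` seconds.
    proc = multiprocessing.Process(target=target, args=args)
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()  # give up on the stuck process
        proc.join()
        return False      # possible false negative: slow != non-terminating
    return True           # reliable: it really did halt

Note the asymmetry: a True answer is trustworthy, but False only means "hadn't halted yet", so a slow (or deliberately stalling) program is indistinguishable from one that never halts. That is exactly the room for error and exploitation.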
Perhaps you should consider a completely different question here... What do we do to address the causes of people stealing from other people?