Fog Creek Software Discussion Board

Trust, 3rd parties, and shades of grey

One issue the computing community has been struggling with for some time is trusting code downloaded over the internet.  It's a difficult problem, since trust means different things to different people - or does it?

I think we can all agree that "no one" likes spyware ActiveX controls and we can mostly agree that (however you feel about flash) the flash player is mostly harmless.

The problem is that the end user - often a home user - is faced with the yes/no decision of whether to trust a 3rd party.  Code signing allows a company name to be displayed before making this decision, but this means little to the end user.

My question:  Is there room here for a 3rd party to make a value judgement about the safety of a particular piece of code?  i.e. You can accept or reject the code based on this evaluation or set your computer to ignore code that didn't "pass inspection".

The ugly truth is that many people spend a large fraction of their web time on a few major sites.  Sometimes these sites would benefit from a rich-client interface (Amazon?).  Code trust is holding this back.

I know this is like the porn debate: you can't define it, but you know it when you see it.  There has been some success with manually blocking porn sites; could there be some sort of "white filter" for downloaded code?

Any thoughts on this?  It seems like a minor problem, but it's holding back progress quite a bit...

Bill Carlson
Tuesday, August 24, 2004

Isn't that kind of what Verisign is?

I am Jack's Nirvana
Tuesday, August 24, 2004


Well, those of us who know would only approve select requests from a select group anyway (Sun, IBM, MS, maybe a few others).

The other users wouldn't have a clue anyway...

KC
Tuesday, August 24, 2004

"Any thoughts on this?  It seems like a minor problem, but it's holding back progress quite a bit..."

Probably the most trustworthy solution to this problem is sandboxes, such as Java has offered and .NET recently brought to the table: let organizations make rich code, but code that has extremely limited rights on my machine - no file access outside of a limited scratch pad [unless I choose otherwise], no ability to talk to any site other than the source, no ability to intercept keystrokes or to monitor other windows, etc.
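
For illustration, a minimal Java policy file along these lines expresses exactly those limits (the codebase URL and scratch-pad path here are invented for the example):

    grant codeBase "http://www.example.com/richapp.jar" {
        // the scratch pad - and no file access anywhere else
        permission java.io.FilePermission "${user.home}${/}scratch${/}-", "read,write";
        // may talk back to the originating site, and nowhere else
        permission java.net.SocketPermission "www.example.com:80", "connect";
    };

Anything not granted - keystroke hooks, other windows, the rest of the file system - is simply refused by the security manager.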

Dennis Forbes
Tuesday, August 24, 2004

I like the Java sandbox model much better than the ActiveX all-or-nothing model. I believe C# has a very similar model to Java, and this is the direction things are headed with Longhorn.

The ActiveX model is a disaster. You either trust the code and give it full rein or you don't run it at all. And even if you trust that the code isn't malicious, there is still the problem of well-meaning but poorly written code. For example, a malicious site might use an ActiveX control signed by a trusted company and exploit a flaw in the control to gain control of your machine once you download it. Not only do you have to trust that the company that signed the code is honest, you have to trust that they write good code too.

With the sandbox model you can grant fine-grained permissions to the code running on your machine. For example, an app that sits in the system tray and alerts you when you have new mail on your google account really only needs network permission to the google mail servers, which is possible in the sandbox model. Even if the code is flawed, it can't rise above its permissions to do malicious things (it can still do malicious things that fall within its permissions, however).
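
As a rough sketch of how that plays out in Java terms (the host and port are just for illustration), every socket the app opens is vetted by the installed SecurityManager:

    // Sketch: the platform checks the connection before any byte leaves
    // the machine. Unless the policy grants java.net.SocketPermission
    // "mail.google.com:443", "connect", this throws SecurityException.
    public class MailAlertSketch {
        public static void main(String[] args) {
            SecurityManager sm = System.getSecurityManager();
            if (sm != null) {
                sm.checkConnect("mail.google.com", 443);
            }
            // only if the check passes does the real connect go ahead
        }
    }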

The greatest hurdle with the sandbox model is deciding what the code should be trusted to do. Inside an enterprise it's easy: the admin sets the rules. In home-user land, though, it gets more difficult. You can ask the users, but 90% of users aren't going to understand the consequences of granting various permissions and won't be bothered to learn. Perhaps your idea of a white-list filter would work in that case.

ronk!
Tuesday, August 24, 2004

Sandboxing is the obvious solution.  Still, there's nothing to keep an app from popping up a fake Windows login dialog and capturing a password, although that would be a problem on a plain HTML page anyway.

It's nice to have a way to prevent people from inadvertently closing your app by hitting close or the back button on a browser.

In some cases, realistically, you have a mix of unmanaged code, COM components, ActiveX controls, calls out to IP addresses other than the originator's, etc.

Some apps might have a legit reason for needing more "scratch space" or memory or access to the file system.  Having grandma make these decisions isn't practical.

I was at a Microsoft event a while back and a speaker said about security: "there are two kinds of people, those who always say yes to a security dialog and those who always say no, and little in between."

Bill Carlson
Tuesday, August 24, 2004

"Sometimes these sites would benefit from a rich-client interface"

You take your rich-client internet and stick it.  It works just fine, as it is. 

Malloc
Tuesday, August 24, 2004

Just in relation to trust and ActiveX controls, I remember waay back in 1997 people talking about what a disaster-in-waiting (downloadable) ActiveX controls would be, particularly the security problems they would present for web surfers.

Needless to say, the prognosticators were pretty much bang on the money since so many PC related problems these days are the result of morons inadvertently installing an ActiveX porn-site trojan.

TheGeezer
Tuesday, August 24, 2004

> I think we can all agree that "no one" likes spyware ActiveX controls and we can mostly agree that (however you feel about flash) the flash player is mostly harmless.

No, we can't all agree on that.  The people who make money off of spyware like it just fine.  And the flash player is harmless by design, and incredibly dangerous in actuality, because any buffer overrun is a total loss of control.

> The problem is that the end user - often a home user - is faced with the yes/no decision of whether to trust a 3rd party.

More specifically, the problem is that the end user is faced with a "retail" decision, rather than a "wholesale" decision.  Dialog boxes are the problem.  Dialog boxes are the thing that stands between you and the shiny thing you clicked on, and everyone gets rid of them by clicking OK.

Security decisions must be made at the wholesale level -- my security policy for this machine is such and such, the machine enforces the policy for me, and never asks me for a security decision again. 

The trick then is to come up with a reasonable out-of-the-box policy which will keep people safe, productive and happy without having to tweak their security settings.  That's hard, but that's one of the aims of the .NET security system.

Eric Lippert
Tuesday, August 24, 2004

Eric,

Thanks for chiming in.  I think the Windows Forms security model is great.  Very granular with isolated storage, prompted file access, etc.

What worries me is that developers will take longer to develop apps that may have reduced functionality - in order to fit within a security model.

Flash was a bad example on my part, as it subjects complex unmanaged code to arbitrary input from the internet.

In our case, we're developing a Windows Forms app.  To get around a grid bug, we had to resort to ugly reflection code in a couple cases.  This automatically means "full trust".  We also use a couple unmanaged DLLs for govt. forms printing.

What we do is simple and boring accounting type stuff.  We don't listen on any ports, etc.  We're very "white-listable".

As far as the blatant spyware/trojan stuff, I think there's room for a human judgement call there.  Many ISVs would gladly submit to a more rigorous Verisign check in order to not be blocked by a casual user's PC.

Will we have this problem in 10 years?  I don't think so.  The pure .NET model is great.  Until then, we're faced with the issue of mixed (i.e. "full trust") applications.

Bottom line:  Human judgement _may_ be necessary so that a casual user can easily do what they need to do without being bothered by signed but obviously malicious code.

As developers, we're predisposed to dislike anything involving shades of grey.  It may be in everyone's best interest in this case, though.  Spam and porn filters are arbitrary and work most of the time.  Most of the time would be better than what we have now.

Bill Carlson
Tuesday, August 24, 2004

>Isn't that kind of what Verisign is?
Nope. They expressly disavow any liability or responsibility for any of the certificates they sign. All their certs are good for is getting the name of someone you can sue.

Peter
Tuesday, August 24, 2004

> The ugly truth is that many people spend a large fraction of their web time on a few major sites.

I think this is dramatically not true. 

It's the view that journalists and the current mainstream media take.
The internet is just another 'channel' to get 'content' to the public.  And the public will choose the 'content providers' they are most comfortable with.

Just like television channels. 

This isn't how people spend time on the web.

braid_ged
Wednesday, August 25, 2004

> those of us who know would only approve select requests from a select group anyway (Sun, IBM, MS, maybe a few others).

You mean I'm alone in getting nervous every time it asks me "Always trust stuff from Microsoft?"


Wednesday, August 25, 2004

> My question:  Is there room here for a 3rd party to make a value judgement about the safety of a particular piece of code?  i.e. You can accept or reject the code based on this evaluation or set your computer to ignore code that didn't "pass inspection".

As has been pointed out, even if the app is benign, it can be a security risk if it is poorly written.  So this 3rd party has its work cut out for it.  In addition, who is this third party, what is their motivation, and what is their business model?  Who pays for this security evaluation, and what is their liability?

Another way to think about it is that the JVM vendors already do this.  They certify that anything written in Java will be properly sandboxed.  They provide the platform, and make sure the platform is secure.  This means that there is no per-app auditing/certifying.
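
As a sketch of what that platform guarantee looks like from the code's side (the file path is hypothetical), an unsigned applet simply cannot read the local disk, no matter who wrote it:

    import java.applet.Applet;
    import java.io.FileReader;
    import java.io.IOException;

    // Sketch: under the default applet sandbox this read is refused for
    // every applet alike - no per-app audit needed.
    public class SandboxDemo extends Applet {
        public void start() {
            try {
                new FileReader("/home/user/secrets.txt"); // hypothetical path
            } catch (SecurityException e) {
                System.out.println("Denied by the sandbox: " + e.getMessage());
            } catch (IOException e) {
                // unreachable in the sandbox - the security check fires first
            }
        }
    }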

Brian
Wednesday, August 25, 2004

"What worries me is that developers will take longer to develop apps that may have reduced functionality - in order to fit within a security model."

Yes, let's keep quick releases of featureful products ahead of security like we have in the past.  No harm there.  Right?  Do you work for Microsoft?  Maybe you need Philo's disclaimer.

Throwing my arms up.
Wednesday, August 25, 2004

-----">The ugly truth is that many people spend a large fraction >of their web time on a few major sites.

I think this is dramatically not true. "-----

Really? I spend 90% of my time on the web (three or four hours a day) on about seven sites. Unless I'm looking for specific information I won't go anywhere else unless I've gone through all seven and still have time to kill.

Stephen Jones
Wednesday, August 25, 2004

Sometimes one gets discouraged on these forums.  I'm not trying to offer a solution with perfect military grade security.  I'm trying to allow "most" people to run "most" apps "most" of the time, while eliminating "most" junk that "most" people find useless.  This is a noble business and humanitarian goal.

Code trust and rich-client execution is an honest problem, with no perfect solution.  It would help to get more responses like Eric's and less "your solution sucks, Bill, because it's arbitrary."

Bill Carlson
Wednesday, August 25, 2004

> I'm not trying to offer a solution with perfect military grade security.  I'm trying to allow "most" people to run "most" apps "most" of the time, while eliminating "most" junk that "most" people find useless.  This is a noble business and humanitarian goal.

But unless you set your sights a little higher, we probably aren't going to get anywhere.  What you describe is pretty much what we have today: you get to decide, using your best judgment, if you want to download and install software from XYZ corp.  If the code itself hasn't been audited by a third party or doesn't run on top of a trusted platform/sandbox, all you have to go on is their reputation.  Who knows better than you if you trust XYZ corp?

Ok, so you could automate this somewhat, keep a clearinghouse of "trusted" (by whom?) sources, but how different is that from where we are now?  And you've just changed the problem to "how do I manage a communal list of trusted (whatever that means) companies".  If my company gets tagged as untrusted, how do I get it off?  See the spam blacklisting wars if you think there aren't problems with this approach.

Eric's solution is what I've called the trusted platform.  You can put all the Java applets you want on your page.  I'm down with that.  In fact, I wish people would replace their Flash crap with Java (pipe dream).  I trust the JVM to enforce reasonable behavior.  It's a hard model to extend to the desktop, because if I download an app, I almost certainly want to give it more permissions.  But for making surfing functional and safe, it's great.

Brian
Thursday, August 26, 2004

> I'm trying to allow "most" people to run "most" apps "most" of the time, while eliminating "most" junk that "most" people find useless.  This is a noble business and humanitarian goal

True dat. Except it won't work. As soon as your solution becomes commonplace, those who delight in writing the junk will adapt their stuff to get around it (since you only want to block "most" of the problem).


Thursday, August 26, 2004

I agree that it's an uphill battle.  The long-term solution is sandboxing.  But for right now, how about an IE option to check a blacklist, maintained by a 3rd party, when loading an ActiveX control or browser add-in?  Yes, it isn't perfect, and yes, you'd be trusting someone else's judgement, but it would keep me from blowing an evening fixing my girlfriend's mom's computer after she clicked on a "shiny thing".
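
Roughly sketched in Java (the hash choice and list source are invented for illustration - a real version would also look at the Authenticode signer), the check itself is cheap:

    import java.security.MessageDigest;
    import java.util.Set;

    // Sketch of the blacklist idea: before activating a downloaded control,
    // hash its bytes and consult a list maintained by the 3rd party.
    public class ControlBlacklist {
        private final Set<String> blockedHashes; // fetched from the 3rd party

        public ControlBlacklist(Set<String> blockedHashes) {
            this.blockedHashes = blockedHashes;
        }

        public boolean isBlocked(byte[] controlBytes) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(controlBytes);
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return blockedHashes.contains(hex.toString());
        }
    }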

Bill Carlson
Thursday, August 26, 2004
