Fog Creek Software
Discussion Board

C/C++ portability wisdom?

When writing system-level (non-GUI) software in C/C++, what are your experiences with OS portability libraries/layers?

I'm starting a new project at work, and I've been thinking about possible target systems: Windows, Linux, and Solaris.  If I'd like, priorities permit me to focus on a Windows version first and port to the others later.  Is that the best approach, seeing as how I can take advantage of the features of each OS?  They DO pay me to write code, anyway.  Or am I better off writing a thin, portable OS layer to develop on top of?  Does a generic layer suppress too many powerful, OS-specific abilities (Solaris threads, Windows overlapped I/O, etc.)?

This is what I see so far:

Portability layer
+ easy port to another OS
- no use of unique, high performance features

Port code by hand
- difficult port to another OS
+ high performance
+ easier to read for other developers

Looking for theory and experiences from the pros.  Thanks in advance.

(btw - 3rd-party libraries are not an option; management has NIHS.  No alternate language suggestions, please.)

Jeff Crosby
Monday, June 28, 2004

Consider using ACE.

Christopher Wells
Monday, June 28, 2004

Just write the software for 'A' platform; then, when you have customers who want the product elsewhere, port it. Don't worry now about how to get to 'N' platforms. That is, unless you have customers for all platforms today!

Monday, June 28, 2004

I agree with James. Our project was started by a guy who decided portability was paramount. An entire abstraction layer was written, using many man-hours. Six years later, it has never been ported, and we have pulled the abstraction layer out bit by bit to save time and simplify things.

If you *know* you will port it, then maybe some effort is warranted to avoid highly platform specific stuff.


Monday, June 28, 2004

ACE is worth a look.  It is a bit complex.

I have had the same code work on many real-time OSes, Windows, and Solaris, so it can be done. And it's well worth it.

Several points:
1. Win32 is truly gross, and Windows isn't really POSIX compliant.
2. Thread scheduling varies a great deal across OSes, as do many fundamental characteristics like memory usage.
3. Don't let any OS call happen outside your encapsulation layer. Not a single one. Add checks to your build system; people will try to throw in quick-and-dirty calls.
4. Build your encapsulation layer as you need it; don't make it a separate effort. You'll probably find you can get away with a very thin layer, so the work will be less than you think. You probably won't use much of each facility.
5. I would have a class per facility (thread, mutex, etc.), not one encapsulation class with a very wide interface.
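Point 5 above might look something like this minimal sketch: one thin class per facility, with the platform split hidden behind #ifdefs. The `Mutex` name and its interface are illustrative, not from the thread; a real layer would have sibling classes for threads, events, and so on.

```cpp
// One encapsulation class per OS facility; no Win32 or pthread type
// leaks out past this header.
#include <cassert>

#ifdef _WIN32
#  include <windows.h>
#else
#  include <pthread.h>
#endif

class Mutex {
public:
    Mutex() {
#ifdef _WIN32
        InitializeCriticalSection(&cs_);
#else
        pthread_mutex_init(&mtx_, 0);
#endif
    }
    ~Mutex() {
#ifdef _WIN32
        DeleteCriticalSection(&cs_);
#else
        pthread_mutex_destroy(&mtx_);
#endif
    }
    void lock() {
#ifdef _WIN32
        EnterCriticalSection(&cs_);
#else
        pthread_mutex_lock(&mtx_);
#endif
    }
    // Returns true if the lock was acquired without blocking.
    bool tryLock() {
#ifdef _WIN32
        return TryEnterCriticalSection(&cs_) != 0;
#else
        return pthread_mutex_trylock(&mtx_) == 0;
#endif
    }
    void unlock() {
#ifdef _WIN32
        LeaveCriticalSection(&cs_);
#else
        pthread_mutex_unlock(&mtx_);
#endif
    }
private:
    Mutex(const Mutex&);             // non-copyable (pre-C++11 style)
    Mutex& operator=(const Mutex&);
#ifdef _WIN32
    CRITICAL_SECTION cs_;
#else
    pthread_mutex_t mtx_;
#endif
};
```

The narrow, per-facility interface is the point: callers can't accidentally reach a platform-specific corner of the API, and the build-system check from point 3 only has to police a handful of small headers.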

son of parnas
Monday, June 28, 2004

Whether or not a portability layer helps depends strongly on what OS features your program needs. With basic file opening/closing and directory-making stuff, it can work great. Sub-process creation and TCP/UDP socket programming are a little more platform-specific, but still workable. Multithreading, shared memory, and non-blocking I/O APIs have semantics that are very difficult to match across platforms, and almost always involve performance sacrifices somewhere to keep things portable.

Dan Maas
Monday, June 28, 2004

Consider using boost

- Satya
Monday, June 28, 2004

Been there, done that, but for an OpenVMS system.  The issue there was that all of the 'legacy' libraries I had to support were done in C.

What I did was develop on a Windows Visual Studio 6 system, using the C++ compiler.  Then I had to implement a 'compatibility' set of plain-C library interfaces on the Windows side, which duplicated the functionality on the VMS side.  Use the `extern "C" { ... }` construct a lot.

I thought Linux and Solaris had C++ sets of libraries.  If not, you can still use the `extern "C" { ... }` construct to link in the native library code for system calls.  I suspect these ARE already built into the system .h files on Solaris.  C++ has been around a while, so the #ifdefs tend to be in place.
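A minimal sketch of the `extern "C"` idea: declare the legacy C routine with C linkage and hide it behind a small C++ wrapper. `legacy_sum` is a made-up stand-in for a real C library function (defined here so the example is self-contained; normally only the prototype would appear, pulled in from the library's header).

```cpp
#include <cassert>
#include <cstddef>

extern "C" {
    // Prototype as the C library's header would declare it.
    long legacy_sum(const long* values, size_t count);
}

// Stand-in implementation with C linkage, purely for illustration;
// in real code this lives in the legacy library.
extern "C" long legacy_sum(const long* values, size_t count) {
    long total = 0;
    for (size_t i = 0; i < count; ++i) total += values[i];
    return total;
}

// Thin C++ wrapper: callers never see the C linkage details.
class LegacyMath {
public:
    static long sum(const long* values, size_t count) {
        return legacy_sum(values, count);
    }
};
```

Without the `extern "C"`, the C++ compiler would mangle the name and the linker would never find the C library's symbol.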

I'd suggest you create some simple 'wrapper' classes for the operating system services, if you don't want to use STL or RogueWave.  Something like FileIO, MemIO, DeviceIO.  Everything is a 'file' under Unix, but those ioctl calls can increase the complexity of the supposedly 'simple' interface.

X-Windows programming is another problem -- but since you are doing system services, you probably don't have to go there.

It would be nice to design an initial interface to Unix, then build a set of plug-compatible API library classes to run under Windows.  Then you can 'lift off' your app from the Unix API running under Windows, move it to Unix, re-integrate it, and go.

At the very least, allow for such a layer between your app and the OS services.  You can tweak the heck out of it when you port, as long as the layer is there.

If you code on Windows without a compatibility layer, then you'll probably have to implement a Windows API layer on Unix <shudder>.  Unix has the nicer (simpler) API, IMHO.

Monday, June 28, 2004

I wouldn't try to make one OS look like another. They don't look like each other, so that's an even more difficult task.

Just create abstractions for the services you use, and only those you use, and only the parts you use. Don't build low-level abstractions. Build higher-level abstractions so you don't have to come up with, for example, a flag that means 'text file' across all OSes. That just doesn't ever work.
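To make the 'text file' example concrete, here's a sketch of what a higher-level abstraction might look like: instead of the layer trying to export a portable text-mode flag, it reads the file in binary and normalizes line endings itself. The `readTextLines` name is illustrative, not from the thread.

```cpp
#include <fstream>
#include <string>
#include <vector>

// Returns the file's lines with CRLF-vs-LF differences hidden from
// callers: the file is opened in binary mode on every platform and any
// trailing carriage returns are stripped here, once.
std::vector<std::string> readTextLines(const std::string& path) {
    std::vector<std::string> lines;
    std::ifstream in(path.c_str(), std::ios::binary);
    std::string line;
    while (std::getline(in, line)) {
        if (!line.empty() && line[line.size() - 1] == '\r')
            line.erase(line.size() - 1);   // strip CR from CRLF files
        lines.push_back(line);
    }
    return lines;
}
```

The caller asks for "the lines of a text file" (the thing it actually wants) rather than for "a file opened with the right flags" (a per-OS detail), which is exactly the low-level knob that never maps cleanly.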

That way, especially because of the differing semantics across systems, you will be safe. You will probably want to deal with abstractions anyway. Even simple things like file I/O and socket I/O you end up abstracting almost immediately.

IMHO there's no real performance issue. You would probably have a library interface to do things anyway, unless you were going to copy code all over the place.

Of course, develop with unit tests so you can verify code works as you develop it.

son of parnas
Monday, June 28, 2004

I'd like to chime in again, if I may...

Just write it for Windows (if that's the first platform you need). Then, when you need it for another platform, rename each Windows API you use as xx<SomeWinApi> and make that name generic across the two platforms. Don't try to create, or guess at, what "layer" you should have now. Spend the time on getting the software done. Later, the items that don't compile on platform "X" will give you a nice list of what to abstract and provide a layer for.
If I'm being too pragmatic, then just say so :)
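In code, the rename-then-abstract approach might look like this sketch (the xx names are illustrative): the body starts out as a one-line pass-through to the Win32 call, and the Unix branch is only filled in when the port actually happens.

```cpp
#ifdef _WIN32
#  include <windows.h>
#else
#  include <unistd.h>
#endif

// Was a bare Sleep(ms) call scattered through the code; now one name.
void xxSleepMs(unsigned long ms) {
#ifdef _WIN32
    Sleep(ms);               // original Win32 call, behind the xx name
#else
    usleep(ms * 1000);       // added when the Unix port came along
#endif
}

// Same pattern for process identity.
unsigned long xxGetCurrentProcessId() {
#ifdef _WIN32
    return GetCurrentProcessId();
#else
    return (unsigned long)getpid();
#endif
}
```

The compiler does the inventory for you: on the first Unix build, every unported xx function fails to compile, which is the "nice list" the post describes.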

Monday, June 28, 2004

I was on a team that did almost exactly what you're describing: start with Windows, then add Solaris and Linux support.  Overall it was a very successful effort.  Our code base included components in kernel space and user space, but no GUI code. We did not use a 3rd-party library; the effort required to evaluate these can almost equal the effort required to write your own.

What we learned:

1. The 80/20 approach is attractive. Write portable APIs/objects for 80% of the functionality you need, but don't bust your butt writing 100%. Be pragmatic.

2. It really helps to have an in-house expert for each OS. We were lucky; we did. You are writing an abstraction layer, and if you don't understand what's really going on in the underlying code for each OS, be prepared for some debugging time. If you don't have in-house experts, then write test code to verify ALL the assumptions you can think of regarding how various APIs work on each OS.

3. Don't hesitate to sacrifice portability for performance, IF performance is truly critical. For one of our products, performance was absolutely necessary for competitive reasons, and we used every non-portable Win32 trick we knew.

4. Have good regression-test, unit-test, and build procedures in place. We did not. It did not affect the final quality of our products, but it did cause some frantic moments and slipped ship dates when someone would check in a quick "improvement/enhancement" and fail to test the other OS builds.

When we started, my vote was to code everything specific to the OS and port it. The tech lead overruled and dictated the 80/20 approach. Looking back, that was the right decision for our situation, because it clearly reduced our workload. I should add that we did have significant sales on all 3 OSes. If you plan for portability and then never sell anything except Win32, you've wasted time.
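Point 2's "test code to verify ALL the assumptions" can be tiny. Here's a sketch of one such probe (the function name and file names are illustrative): it checks whether `rename()` silently replaces an existing destination, which is true on POSIX systems but not for Win32's `MoveFile`, exactly the kind of per-OS difference these probes are meant to catch before it bites in production. The assertion below encodes the POSIX answer.

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Probes one OS assumption: does rename() replace an existing target?
bool renameReplacesExisting() {
    std::FILE* a = std::fopen("probe_src.tmp", "w");
    std::fputs("new", a);
    std::fclose(a);
    std::FILE* b = std::fopen("probe_dst.tmp", "w");
    std::fputs("old", b);
    std::fclose(b);

    bool replaced = (std::rename("probe_src.tmp", "probe_dst.tmp") == 0);

    char buf[8] = {0};
    std::FILE* check = std::fopen("probe_dst.tmp", "r");
    std::fgets(buf, sizeof buf, check);
    std::fclose(check);
    std::remove("probe_dst.tmp");   // clean up after the probe
    std::remove("probe_src.tmp");
    return replaced && std::strcmp(buf, "new") == 0;
}
```

A folder of a few dozen probes like this, run on every target OS, substitutes surprisingly well for not having an in-house expert on each one.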

Monday, June 28, 2004

Here's one problem.  The high-performance I/O subsystems of Linux, the other *nixes, and Windows are all completely different.  If you want your code to scale up to thousands of concurrent connections on Linux, you really should look at sys_epoll().  On Windows you should use overlapped I/O.  The problem is that the programming models for these two mechanisms are very different: the Linux model is reactive, while the Windows model is proactive.  This could make porting high-performance code from one platform to another almost impossible without taking a significant performance hit.

This is something to think about.  If you don't intend to scale up to the thousands-of-concurrent-connections level, then you should just use a typical thread-pool model.  That would be much easier to port, but it doesn't take advantage of the high-performance subsystems of either system.
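For readers who haven't met the two models: this Linux-only sketch shows the reactive shape the post describes. epoll only reports that a descriptor is *ready*; the application then performs the read itself. Under overlapped I/O, by contrast, the application hands Windows a buffer up front and is notified on *completion*. The `readWhenReady` helper is illustrative, not from the thread.

```cpp
#include <sys/epoll.h>
#include <unistd.h>
#include <cassert>
#include <cstring>

// Reactive model: wait for readiness, then do the I/O ourselves.
// Returns bytes read, or -1 on timeout/error.
ssize_t readWhenReady(int epfd, int fd, char* buf, size_t len) {
    epoll_event ev;
    int n = epoll_wait(epfd, &ev, 1, 1000);   // 1s timeout
    if (n <= 0) return -1;                    // nothing became ready
    return read(fd, buf, len);                // the app performs the read
}
```

Porting this shape to overlapped I/O means inverting it (issue the read first, react to its completion), which is why the post calls the two models "almost impossible" to bridge without losing performance.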

christopher baus
Tuesday, June 29, 2004

Make a list of the platforms you wish to support, ordered by importance. Make sure you're well aware of the differences among the first three. Write it so that it works on all of them, but generally optimized for the first.

Find an abstraction layer that works well for you and use it, but make way for high performance platform specific hacks in the future.

Qt is good (though not at all cheap) and supports Mac, Win and most Unixes. It does a lot more than GUI these days.

Also, consider using Python. It's amazingly portable, reasonably fast (even more so using Psyco), and -- with some care -- easy to gradually replace with C code for the bottlenecks. Despite what other posters may say, distribution is NO problem whatsoever (unlike Java and at this point even .NET).

Ori Berger
Tuesday, June 29, 2004

It's a cross-platform GUI lib (Win/Mac/Unix) based loosely on MFC.
It also includes cross-platform IPC, process creation, and file and network I/O stuff.

Very stable and well documented. We use it for Win32-only projects because it's nicer than MFC!

The only drawback is that it was written to be compatible with a range of older C++ compilers, so it has its own string, vector, etc. classes, but you can still use modern C++ libs with it.

Martin Beckett
Tuesday, June 29, 2004

Why develop for multiple platforms at all?

Mr Jack
Tuesday, June 29, 2004

The Mozilla Foundation, of Firefox fame, has a C++ portability guide. I think some of what's here is a bit outdated, especially the Templates part, but it won't hurt to have a look.

Tuesday, June 29, 2004

I don't generally do cross-platform.  But I use wxWidgets (formerly wxWindows) anyway, because it is just so much nicer, and it wraps threads, processes, files, and networking really nicely.  And if I ever needed to go to Unix, I have the confidence my apps would probably work.

And if you ever do do GUIs, then wxWidgets is an even better contender.  But I offer this framework for non-GUI stuff too, as it is so strong in that area as well.

i like i
Tuesday, June 29, 2004

Another cross-platform C++ toolkit that's available is Common C++.  It's a bit like ACE, but not as heavy.

I used it in a commercial shipping project to great effect, and it's LGPL.

Tuesday, June 29, 2004

There is no such thing as portable code, only code that has been ported.

Wisdom of the ages
Tuesday, June 29, 2004

Apache has a portable runtime (APR) that some projects use.  It's C only, I think -- I've never used it -- but it might help.  From glancing at it once, I think it's more on the level you want than wxWidgets (which I have used, and very much like) -- system-level stuff.

Tuesday, June 29, 2004
