Fog Creek Software
Discussion Board

SCM, Teams & Third-Party Libraries

I am interested in how people manage third-party libraries in an SCM-based, multi-developer environment.

My solution for now is to specify into which directories third-party software (libs, DLLs, header files) has to be installed relative to the source code tree. So there's a \base\lib path where the libraries have to be placed, and a \base\projects\prj_xy directory where the source code for project xy is checked out. \base\ may be different on every developer workstation, but the relative path from prj_xy to library lib_z is always ..\..\lib\lib_z. BTW: we do not check the libs in; we only have to make sure they have been installed on each workstation so the source code compiles.
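
For concreteness, the layout described above looks like this (prj_xy and lib_z stand in for real project and library names):

    \base\
      lib\
        lib_z\        <- third-party libs installed here, never checked in
      projects\
        prj_xy\       <- project source checked out here; refers to the
                         library as ..\..\lib\lib_z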

Problems occur when third-party libs come with their own setup program and insist on installing into ...\Program Files\... (aaargh).

Your suggestions/solutions?

Alex
Friday, April 04, 2003

Our solution is similar to yours - the only big difference is that we *do* use SCM on the third-party libraries, treating them as a single project parallel to the ones under development.

We use CVS, so the directory tree for all third-party software is treated as an independent CVS module to which we apply vendor tags; this allows periodic updates to track new releases of the third-party software. You can do much the same with ClearCase if you use baselines and config specs correctly (and doubtless with whatever other SCM system you are keen on).
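
For anyone who hasn't used CVS vendor branches, the initial import and a later upgrade look roughly like this (the module name, vendor tag, and release tags below are made-up examples):

    cvs import -m "Import libfoo 1.2" thirdparty/libfoo LIBFOO_VENDOR LIBFOO_1_2
    (vendor later releases 1.3)
    cvs import -m "Import libfoo 1.3" thirdparty/libfoo LIBFOO_VENDOR LIBFOO_1_3
    cvs checkout -j LIBFOO_1_2 -j LIBFOO_1_3 thirdparty/libfoo
    (merge any local patches, then cvs commit)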

Having both the codebase under development and the third-party software under SCM, and held in the same directory tree, makes it very easy to move from machine to machine - very handy for working on- and off-site.

We don't do this for run-time libraries bundled with the compiler - after all, if you've got a compiler installed you'll probably have these installed too.

As for libraries that require something other than copying to their final destination: you are going to have to write an installer for your final deliverable anyway, so why not use a cut-down version of it to install these for internal development?
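
As a sketch of such a cut-down installer, assuming a Windows batch file and a shared drop point on the network (all paths and the lib_z name are placeholders; %BASE% is assumed to be set per-workstation to the local \base\ root):

    rem dev_setup.bat - cut-down internal installer (illustrative only)
    rem copy the library from the shared drop point into the agreed tree
    xcopy /E /I /Y \\server\thirdparty\lib_z %BASE%\lib\lib_z
    rem silently register the COM DLL, as the vendor's real setup would
    regsvr32 /s %BASE%\lib\lib_z\lib_z.dll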

HTH.

Gerard
Friday, April 04, 2003

+1 for checking in third-party libraries. They have their own section of the tree for us, and we check in the binaries even when we have the source (we don't really want building them to be part of our "clean build" cycle).

Brad (dotnetguy.techieswithcats.com)
Friday, April 04, 2003

To me, the intent of an SCM system is to help manage and control the build process.  In general, this means I want to be able to go "back in time" and recreate any previous build.  That build must be exactly the same now as it was then.

Pretty much everything that affects the build should be checked in as part of the repository: source code, documentation, code-generation tools....  The only exceptions are things that change slowly or simply can't reasonably be checked in.  Those programs and operating systems should be kept on CD or in some other stable format and documented as part of the release (e.g., if version 1.3 is built on Win2K SP2, that should be documented).
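
A lightweight way to capture that documentation, assuming nothing fancier than a text file archived with each release, might look like this (the entries are illustrative; only the Win2K SP2 line comes from the example above):

    BUILD-ENVIRONMENT.txt for release 1.3
    -------------------------------------
    OS:        Windows 2000 SP2
    Compiler:  (compiler name and exact service-pack level)
    Tools:     (code generators, SDKs, with versions)
    Media:     (which CD/archive holds the pieces not in the repository)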

I won't comment on backups other than to say whatever system you use is only as good as the backups you keep.

-Thomas

Thomas
Friday, April 04, 2003

Brad, you _DO_ want to recompile any possible source when you do a clean build. You should be doing that every time you change compiler options (exception handling, optimization, code generation, threading models, etc.) or install a compiler service pack. While these do not happen often, the purpose of a clean build _IS_ to make sure that you (or the compiler) didn't forget anything.

Ori Berger
Tuesday, April 08, 2003

No, I really don't. :) If I were changing environments (like, say, .NET 1.0 to 1.1), that would be different. If I were using C++, I could see your point, but I'm not; the .NET stuff is binaries, not link libraries.

Brad (dotnetguy.techieswithcats.com)
Wednesday, April 09, 2003
