Fog Creek Software
Discussion Board




#define PRIVATE static

According to one of the reviews for 'Programming Industrial Strength Windows' on Amazon, the author uses the line '#define PRIVATE static' in his code. Now, is it just me, or would the rest of you instantly discount the book on the grounds that anyone who would use such a line is obviously not capable of programming in either C or C++?

Mr Jack
Wednesday, August 20, 2003

That seems sort of shortsighted. With the attitude that you are not going to read any book by any non-perfect author, I don't think you're going to find much to read.

I don't know the details of the code in question, but a define like this actually looks like one way of adding documentation to C code. I wouldn't do it that way, but it's not really a death-penalty offense.

Joel Spolsky
Wednesday, August 20, 2003


The person who authored that unfavorable review also discounted the book because it discussed 'roll-your-own' install programs.  His point was that a zip file should suffice for installation.  He also ranted about how much easier it is to 'whip together' the sample applications using AppWizard and MFC instead of laboriously building them from scratch using the API and a custom framework.

I'm more inclined to discount the reviewer's opinion on the grounds that he obviously doesn't understand the issues involved in building real-world shrink-wrapped Windows programs, and that he completely missed the point of the book.

OTOH, I would never use an abomination like '#define PRIVATE static' in my C++ code :-)

Craig
Wednesday, August 20, 2003

It's pretty rare that I read anything where I totally 100% completely agree with the author.

Just because an author says something stupid doesn't instantly disqualify him from having a good point here and there.

Heck, I say plenty of stupid things, but every once in a while I actually say something that is correct. :')

Mark Hoffman
Wednesday, August 20, 2003

The static keyword has effectively changed meaning as it moved from C to C++ and other languages.

I'm assuming that the author is talking about using this technique in C code, where it makes some sense.  In C, the static keyword means that a function can only be used within that piece of source code, which helps prevent name clashes and stops people from calling functions you didn't intend to expose.  It's a wholly useful construct, and the PRIVATE define is a useful way of marking these functions.

In fact it makes even more sense to use it now, since in C++, Java, etc. the static keyword is used to mean something quite different when applied to classes.
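
A minimal sketch of what that convention can look like in a C source file (the file and function names here are just illustrative):

/* widget.c */
#define PRIVATE static

/* Only visible inside this translation unit, so it cannot clash with
   a same-named function in some other .c file. */
PRIVATE int clamp(int value, int lo, int hi)
{
  if (value < lo) return lo;
  if (value > hi) return hi;
  return value;
}

/* Part of the module's public interface, declared in widget.h. */
int widget_set_size(int size)
{
  return clamp(size, 1, 100);
}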

Colin Newell
Wednesday, August 20, 2003

Yeah the deal is that static has a few uses which are quite confusing for the beginner.  I program in C every day and I only really remember the most common ones:

static applied to functions/variables at file scope: only visible in that translation unit

static applied to a local variable: the variable gets static (program-lifetime) storage and persists between function calls

So these are totally different uses.  The second one is like Java I guess.
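
A tiny sketch of that second use (the function name is just made up for illustration):

#include <stdio.h>

void count_calls(void)
{
  static int calls = 0;   /* initialized once; keeps its value across calls */
  calls++;
  printf("call #%d\n", calls);
}

int main(void)
{
  count_calls();   /* prints: call #1 */
  count_calls();   /* prints: call #2 */
  return 0;
}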

Same with extern, it's totally a stupid hack.

The danger is that you could end up with stuff like:

void f()
{
  PRIVATE int f;  /* expands to "static int f;" -- a persistent local, nothing "private" about it */
}

hehehe...

I wouldn't use it, but I don't see the harm.  I could get used to it.  Microsoft has a whole bunch of #defines for types in their code I think.  In Code Complete, there is some really stupid C advice, like "use the star rule to tell if a parameter can be modified", haha... or you could actually learn how pointers work.

Andy
Wednesday, August 20, 2003

Welcome to stupidity 101:

#define PRIVATE static
#define IN
#define OUT //someone doesn't like const
typedef void* PVOID;
#define interface struct
#define BEGIN {
#define END }
typedef unsigned short WORD;
typedef unsigned long DWORD;
typedef const char* LPCSTR;  // what an abortion

Nice to see this topic cycle around every now and then.  If you don't like the C language, don't use it.

buttering my bacon
Wednesday, August 20, 2003

Let the "I'm a bigger C expert than you are!!!!" begin.

Get Over Yourself.
Wednesday, August 20, 2003

Some of the defines you've listed have very valid purposes for existing.  Maybe you'll learn why Microsoft did that sort of thing back in the 16/32 bit days over the next year or so as we move into the 32/64 bit era.

While I personally wouldn't use the #define mentioned from the book, I think it says more about how horrible bits and pieces of C (and by extension C++) are when it comes to clearly and explicitly specifying what you want.  I'm perfectly familiar with the many and varied uses of static (and const, for that matter), but I'd still have trouble keeping a straight face trying to explain to a new programmer why 'static' was chosen to mean file-global (or, uh, file-PRIVATE) when used in that context.

Mister Fancypants
Wednesday, August 20, 2003

And, for what it's worth, I'm not a bigger C expert than "you".

Mister Fancypants
Wednesday, August 20, 2003

The reviewer sounds like one of those heroes who's still celebrating the fact that he can get his C++ programs to compile, and who wrongly presumes that what he knows must be more than what everyone else knows.

.
Wednesday, August 20, 2003

Agreed, WORD and DWORD have valid uses, since short and long are platform-dependent.  We write all our code that way, with int8, int16, int32, uint8, uint16, uint32 typedefs.

This makes so much sense in fact that they added it to the C99 standard -- I think they're called int8_t, int16_t, etc.

The other stuff (PRIVATE/static, PVOID/void*) is just cosmetic, done in the name of "readability", which is inherently subjective.
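
A minimal sketch of the kind of header that approach implies (the names follow the post; the underlying types assume a typical 32-bit compiler, and a real header would pick them per target -- on C99 you could just include <stdint.h> instead):

/* portable_types.h -- illustrative only */
#ifndef PORTABLE_TYPES_H
#define PORTABLE_TYPES_H

typedef signed char      int8;
typedef unsigned char    uint8;
typedef short            int16;
typedef unsigned short   uint16;
typedef int              int32;
typedef unsigned int     uint32;

/* C99's <stdint.h> standardizes the same idea as
   int8_t, uint8_t, int16_t, uint16_t, int32_t, uint32_t, ... */

#endif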

Andy
Wednesday, August 20, 2003

Why WORD and DWORD are poor choices:

Because if you wanted a 32 bit signed int, you would use int32_t, and if you wanted a 16 bit signed int you would use int16_t (or equivalents).

Since most 32 bit CPUs operate on 32 bit words, the use of 'WORD' for a 16 bit type is a poor choice and will continue to be a poor choice for 64 bit code (hmmm, QWORD).

Again, 'readability' -- the #define choices are more readable if you don't like the C language.  Good for you.

buttering my bacon
Wednesday, August 20, 2003

Oh no, 64-bit Windows apparently does much worse than QWORD.
DWORD_PTR = something wacky: it's 64 bits, so it can hold a 64-bit pointer. Argh, who came up with this? Maybe they'll do something more sensible before it goes mainstream.

http://msdn.microsoft.com/library/en-us/win64/win64/rules_for_using_pointers.asp

mb
Wednesday, August 20, 2003

"Because it you wanted a 32 bit signed int, you would use int32_t, and it you wanted a 16 bit signed int you would use int16_t (or equivalents)."

That's all well and good except for the fact that there was no int32_t or int16_t back then. 

Also it is worth mentioning that despite the traditional hardware definition of what a word is, it is becoming increasingly common for a "word" to be understood as meaning 16 bits, and a "double word" to be understood as meaning 32 bits, even on non-x86 platforms and even among the likes of hardcore PLC programmers.  So, yeah, I'd be fine with QWORD for 64 bit data.

The anal will whine and say that a word size is the number of bits the processor can handle in one operation, but in a world where it is common for a processor's registers to be different sizes from the processor's data buses, it becomes hard to argue exactly what the 'word' size is even on a specific CPU.

IOW, the usage of the word 'word' has changed.  Don't be a hanger-on, embrace the change.

Mister Fancypants
Wednesday, August 20, 2003

I have heard there is some confusion regarding platform independent C macros.  To eliminate this confusion, allow me to clarify once again:

An INT is an int.  A UINT is an unsigned int, which we also call a DWORD.  A DWORD is two WORDs.  A WORD is so named because it is equal to half a word on all modern processors.  WPARAM stands for a WORD param, which is, as one would expect, not a WORD, but rather a processor word, also known as a LONG.  A LONG, of course, is a signed DWORD.  For convenience you may also call a LONG an LPARAM.

A pointer is the same size as a LONG.  A long pointer is the same size as an INT.  None of these are the same size as a C# or Java long, but an __int64 is.  An LPSTR is a PSTR, which is archaically known as a char*.  Sometimes an LPTSTR is also an LPSTR, but lately it's more likely to be an LPWSTR.  An LPWSTR (also known as a PWSTR) is also an OLESTR, and although it is a short* like a BSTR, you should never pass it to a function that actually requires a BSTR.  Since BSTRs are so important, we also provide _bstr_t.

Also keep in mind a HANDLE is a void* (or a PVOID), which is a pointer that doesn't point to anything.  This is not to be confused with a null pointer, which is also a pointer that doesn't point to anything.  But a HANDLE doesn't really point to anything, it just functions as an opaque integer.

Clear?

Alyosha`
Wednesday, August 20, 2003

Shit, am I glad I rarely touch Microsoft code.  It always looked funky to me, and now I know it actually is funky.  ; )

Andy
Wednesday, August 20, 2003

"Since BSTRs are so important, we also provide _bstr_t"

Don't forget CComBSTR.

At any rate, Microsoft realizes what a huge mess this has become over time, which is part of the reason they are pushing Windows programmers to .NET, which has an extremely well designed and consistent set of types and framework libraries.

Mister Fancypants
Wednesday, August 20, 2003

Thanks Alyosha, I had managed to forget most of that; now it's lodged in the upper right quadrant of my forebrain.

The contract has been taken out.

Simon Lucy
Wednesday, August 20, 2003

Why doesn't everyone just use LSA_UNICODE_STRING?

as
Wednesday, August 20, 2003

That business about LPTSTR -> {LPSTR, LPWSTR} brings up what I think is an important point.  C/C++ should have a byte data type, and sizeof(char) should vary between Unicode and ASCII systems.  A byte data type would simplify so much.  In fact, it could make the byte ordering of larger types an obscure and unimportant detail.  Just imagine being able to say something like this:

byte Base256Seq[sizeof(int)] = reinterpret_cast<byte*>(&some_int);

K
Wednesday, August 20, 2003

OK, <casting aside comments about the C++ one-upmanship>, how is this something you can't do now, and how does it make byte swapping any more unimportant than it is now (which is: not that important)?

So you're casting a pointer to an int to an array of bytes.  That's how everyone I know has done byte swapping.

And, unless I'm missing something, you can already do what you're asking about, just do

typedef unsigned char byte;

#ifdef UNICODE
  typedef wchar_t my_char_t;
#else
  typedef char my_char_t;
#endif

and use byte and my_char_t everywhere.  Maybe not as clean, but it works.

Andy
Wednesday, August 20, 2003

Andy,

I'm aware of those 'workarounds' but they don't solve the problems that I'm trying to address here.

Byte ordering is important to me, whether or not you have a particular concern about it.  What I was describing was a clean way, in my opinion, to solve a couple of problems -- standardizing byte ordering and treating text in a simple abstract way (related problems, actually).

Systems vary on language support, processor word size, floating point support, and so on.  There's no reason for code in a transition to a 64-bit processor to use __int64 when it doesn't actually care about the size of the signed integer type.  There's no reason to care about a character type either -- whether a character is one byte or two is usually not important to a text processing algorithm.  It also happens to be nice to cast a character to an endian-agnostic format for cross-platform storage or encryption or some such thing.

The fewer platform-specific assumptions necessary and the simpler and more obvious the primitives, the better the language/library (in my opinion).  The concept of an atomic type is valuable (bits or bytes, whatever is practical).

K
Thursday, August 21, 2003

OK, uh, so how does what I wrote not address all the problems you listed?

If I read correctly, it is equivalent to what you suggested in your first post.

It's maybe not as "nice" as having it in the language itself, but unless you are part of the C++ standards committee, I would just hide it in a header and pretend such "ugliness" doesn't exist.  : )

Andy
Thursday, August 21, 2003

And this, my friends, is why I switched to Visual Basic and why I LOOOOOVE .NET.


bunch of geeks ;)

Geert-Jan Thomas
Thursday, August 21, 2003

Joel,

It's not really a question of non-perfect, it's a matter of doing something which is an utter, total abomination; non-perfect I can live with, utterly clueless I can't. Obviously I've not read the book, so it might not be a big deal, and the rest of the reviewer's post did make him look rather stupid.

And #define PRIVATE static is not a documentation technique, it's a language-mangling and obfuscation technique. No better than using #define begin { and #define end }.

Mr Jack
Thursday, August 21, 2003

Listen Andy, save the condescension for your children.

You can't cast a pointer to a multibyte type to an array of bytes and get the same sequence on different platforms.  And when you switch to Unicode using a character type local to your particular modules, you can't interoperate with years of string libraries written on the assumption that a "char" is a character, except by one-off hacks that may or may not survive recompilation (e.g. #define char short at project scope in each library and recompile) -- and those hacks certainly won't give you the compiler warnings that could easily be produced if the language treated a character as a byte/byte-sequence in the way described above.

This would also have the benefit of making basic_string and basic_ios much simpler classes and letting us get rid of hacks like char_traits.

K
Thursday, August 21, 2003

Whoa, whoa, calm down, just answer me this:

Is your suggestion equivalent, or is it not, to the 5 lines of code I wrote?  If it's not, then I am not understanding correctly, and I will learn something new (which is good).  If it is, then what you're suggesting is just syntactic sugar, and thus I could give a crap.

Also, is your beautiful line of code equivalent to (in plain C):

char Base256Seq[ sizeof( int ) ] = (char*)&some_int;

If it is, then uh that's what people have been doing for say 20-30 years.
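
For what it's worth, a compilable sketch of that long-standing idiom looks something like this (the variable names are just illustrative):

#include <stdio.h>

int main(void)
{
  int some_int = 0x01020304;
  unsigned char *bytes = (unsigned char *)&some_int;  /* view the int's storage byte by byte */
  size_t i;

  /* Prints the bytes in memory order, so the output shows the platform's
     endianness (e.g. 04 03 02 01 on little-endian x86). */
  for (i = 0; i < sizeof some_int; i++)
    printf("%02x ", (unsigned)bytes[i]);
  printf("\n");

  return 0;
}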

You do realize that, in addition to preventing you from making some textual changes to the libraries that you use, redefining char would also break basically *all existing C++ code*.

Maybe bring it up on comp.lang.c++.moderated.

Andy
Thursday, August 21, 2003

Andy, once again please save your condescending remarks for your children.  I assume that you're either very young (in which case condescension is the norm and should be expected) or you're remarkably thick.

To answer you, no your suggestion is not equivalent.

K
Thursday, August 21, 2003

No seriously, I am trying to understand why it is not.  What's the difference?  What could you do if it was built into the compiler that you couldn't do with the typedefs?  Maybe I am thick.

Also, why is the C statement not the same?

Do you still think char should be redefined even though it would break all existing code?  Why not add another type like char_t that varies in size?  Oh wait, that's what typedef is for: to add new types.

And yes I'm actually only 13.

Andy
Thursday, August 21, 2003

Andy, the purpose of "byte" is to define an atomic type.  If you care that much about knowing why your "fix" isn't equivalent then look it up.  I'd be more inclined to help you if you didn't insist on throwing in a self-congratulatory comment in every freaking post (as if I don't know what typedef is for).

K
Thursday, August 21, 2003

Define atomic type.

What should I look up?  I looked up "C++ atomic types" on Google Groups, and basically everyone said that there is no such thing as an atomic type in C++.  The word is "primitive", which basically means the non-aggregate types, i.e. char, short, int, long, float, double, etc.  Which doesn't make any sense in the context you're using it in.

Note sizeof( char ) is defined as 1, which means that sizeof( T ) % sizeof( char ) == 0 for all types T.  So if you mean what I think you mean, char is atomic.

What about the other 2 questions?

Andy
Thursday, August 21, 2003

Look up "pedantic" and "purposefully obtuse."

K
Friday, August 22, 2003

K, you are pathetic.

7 parts per million
Friday, August 22, 2003

Look up "made trivial, pedantic remark, defended remark with meaningless, obscurantist bullshit, and refused to acknowledge better solution because of fragile ego and low self-esteem, while not answering simple questions and insulting someone who was trying to teach him something"

So I take joy in deflating charlatans.  Call it a character flaw.

Andy
Friday, August 22, 2003

That was pretty pathetic...

K you are the weakest link!

Jon
Monday, August 25, 2003

Recently I was browsing the source code of the famous Bourne shell, from the late-1970s Version 7 Unix.

(For those of you who prefer Windows: the Bourne shell (or an equivalent) is the standard Unix shell, the one invoked when you use the system() call to run a program. It's the basis for GNU's bash shell. It's used both interactively and for shell scripts.)

When I first looked at the code, I wondered if it was C code, because he redefined so much of it with macros.

For instance, he defines:

#define IF    if(
#define THEN    ){
#define ELSE    } else {
#define ELIF    } else if (
#define FI    ;}

and a couple dozen others.
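
Used together, code written with them reads roughly like this (an illustrative sketch, not a quote from the actual source):

#include "mac.h"   /* the macro header linked below */

IF argc > 1
THEN    name = argv[1];
ELSE    name = defpath;
FI

/* after preprocessing this is plain C:
   if( argc > 1 ){ name = argv[1]; } else { name = defpath; ;} */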

His macros can be seen here:

http://www.tribug.org/pub/tuhs/PDP-11/Trees/V7/usr/src/cmd/sh/mac.h

An example of his source code using these macros is:

http://www.tribug.org/pub/tuhs/PDP-11/Trees/V7/usr/src/cmd/sh/main.c

I don't have a point in showing this, other than the curiosity of it. It was intriguing source code to read.

Alain Roy
Friday, September 05, 2003
