Fog Creek Software
Discussion Board

Amazingly stupid, simple C++ question

I need to write a very, very simple program that writes the computer's name out to a text file and then copies this file to a shared drive.

It's been eons since I've done C++, and I'm struggling with strings. Here is a code snippet:

char* temp = "<?xml version='1.0' standalone='no'?>\r\n";
outfile.write(temp,sizeof(temp));

Now..I know this won't work: sizeof is gonna return 4, because it considers temp to be a pointer and takes the size of the pointer itself.

So, how do I write a string like this to disk? How do I need to declare it so that I can use sizeof on it?

Also...my code snippet to get the computer name (We're talking Windows here) is:

char szBuffer[256];
DWORD dwNameSize=256;
GetComputerName(szBuffer, &dwNameSize);

This works, but szBuffer will always be 256 bytes long. How do I trim this? Is there a better way to call this?

Please..be gentle..I haven't used C++ in years and I just need to finish this tiny app, then I promise I'll leave your language alone and go back to my C#!

No way. I feel like an idiot already.
Saturday, January 31, 2004

char temp[] = "<?xml version='1.0' standalone='no'?>\r\n";
outfile.write(temp,sizeof(temp) - 1);

Or something like that.

mackinac
Saturday, January 31, 2004

Why not use a string class, such as the C++ standard one?

For the file I/O, why not use the stream operators? Then you wouldn't need to know the length when you output it (they write up to the null character for C strings, and use the stored length for STL strings). Unless you are using some MS-specific class?
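
Something like this, maybe (a rough sketch -- the filename and the wrapping main() are just for illustration):

#include <fstream>
#include <string>

int main() {
  std::ofstream outfile("machine.xml");  // filename here is just an example
  std::string header = "<?xml version='1.0' standalone='no'?>\r\n";
  outfile << header;  // no need to track the length yourself
  return 0;
}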

If GetComputerName takes a pointer to an already allocated char buffer, the docs should say how big that buffer is expected to be. And it looks like the length written to the buffer will be stored in dwNameSize (since you pass it as a pointer). At least, that'd be my guess; you should check the documentation.

I don't know much about windows development, but on unix I'd accomplish your task with a shell script. I know windows has *some* scripting capabilities, so it may be simpler to use them. Hell, you could probably pull the computer name from an environment variable and use echo in a batch file to accomplish your task.

Mike Swieton
Saturday, January 31, 2004

Gawd, mackinac, don't say if you don't know. First, that's not a minus you want there.

Here's the C++ way:

outfile << "<?xml version='1.0' standalone='no'?>\r\n";

and here's the C way:

fprintf(outfile, "<?xml version='1.0' standalone='no'?>\r\n");

Dennis Atkins
Saturday, January 31, 2004


Thanks Dennis!

That was much simpler and works great. I've still got some weirdness with the computer name and trailing characters, but I think I can Google that and figure it out.

thanks again

No way. I feel like an idiot already.
Saturday, January 31, 2004

> this works, but the szBuffer will always be 256 bytes
> long. How do I trim this? Is there a better way to call
> this?

Yes. The size parameter is also an OUTPUT parameter.

-----

char *szBuffer=NULL;
DWORD dwNameSize=0;

// Fails, because buffer too small
GetComputerName(szBuffer,&dwNameSize);

// Necessary buffer size is in dwNameSize
szBuffer=new char[dwNameSize+1];

// ALWAYS CHECK NEW OBJECTS
if(szBuffer!=NULL)
{
  // Buffer is now big enough
  GetComputerName(szBuffer,&dwNameSize);

  // Some Win32 APIs don't null-terminate
  szBuffer[dwNameSize]='\0';
}
-----

I can never keep track of which Win32 APIs do and don't null-terminate strings, so I always assume they don't.
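
One thing the snippet above leaves out: the buffer comes from new[], so when you're done with it, release it or it'll leak:

// ...use szBuffer...
delete [] szBuffer;  // matches the new char[...] above
szBuffer=NULL;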

Caliban Tiresias Darklock
Saturday, January 31, 2004


Ok..one more simple question that has me confused.

What is the best way to turn a number into a string? I've generated a random number using rand, and I want to use this as my filename.

Do I use the ostringstream class? I see some stuff on Google, but I also see concerns about memory leaking.

I just need to be able to take my unsigned int, convert it to a string and add a .xml extension to it. This seems so simple, but I see about 43,213 ways of doing it.

No way. I feel like an idiot already.
Saturday, January 31, 2004

char str[12];
srand(time(NULL));
int num=rand();
sprintf(str,"%d",num);

Caliban Tiresias Darklock
Saturday, January 31, 2004

Oh yeah... your .xml extension. Sorry. That last line ought to be:

sprintf(str,"%d.xml",num);

There are more C++ ways to do it, of course, but I don't mess with those too much.

Caliban Tiresias Darklock
Saturday, January 31, 2004

A few things:

Don't use sprintf. Ever. Under any circumstances. It is too easy to lose track of whether the buffer you give it will be big enough. If you must use an sprintf-style call, use snprintf, which is told the buffer size and won't write past it. It's standard C99 but not standard C++, so on some compilers you may have to go and find an implementation.
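
Something along these lines, say (a sketch; on the MSVC of this era the function is spelled _snprintf and behaves a little differently on truncation):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
  char buf[32];
  srand((unsigned)time(NULL));
  int n = snprintf(buf, sizeof(buf), "%d.xml", rand());
  if (n < 0 || n >= (int)sizeof(buf))
    return 1;  /* didn't fit; bail out rather than use a bad filename */
  /* buf now holds something like "12345.xml" */
  return 0;
}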

Two: std::string is your friend. You won't leak memory with a stringstream and output to a std::string object. Don't fight C++ :) If you're using C++, at least consider doing it the C++ way. Basically everything C++ adds was meant to make it easier to develop good software than it is in C. Whether it achieved that or not is debatable, of course. Extremely debatable.

As I recall:

std::stringstream formatter;
std::string output;
formatter << someNum << ".xml";
formatter >> output;
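
Or, to sidestep the >> extraction (which stops at whitespace), pull the whole buffer out with str() -- a sketch, assuming <sstream> and <string> are included and someNum is as above; an ostringstream is enough since we only insert:

std::ostringstream formatter;
formatter << someNum << ".xml";
std::string output = formatter.str();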

Mike Swieton
Saturday, January 31, 2004

I like sprintf myself and find it is perfectly safe... when used appropriately.

"Never use"? I hope you are exaggerating...

Me, I'd say, "whenever" you use sprintf, make sure there is absolutely no possibility of overflowing the output buffer.

The * operator in the printf language is your friend.

Dennis Atkins
Saturday, January 31, 2004

But snprintf() truncates your string. So when your buffer is too small, you don't get an access violation -- you get the wrong filename. At least an access violation tells you that something is wrong.

Caliban Tiresias Darklock
Saturday, January 31, 2004

I agree with the other posts in this thread that you're better off using streams and the standard string class, but since I didn't see anyone mention it previously, what you were really looking for in the original post was strlen() rather than sizeof(). 

Assuming your char pointer string is null-terminated, strlen will return the number of characters in it. If you want to be pedantic you can then multiply that number by sizeof(char) to get the size of the string data, but sizeof(char) is defined to be 1 anyway.
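
So the snippet from the original post could be fixed along these lines (a sketch; outfile as declared there):

#include <cstring>

const char* temp = "<?xml version='1.0' standalone='no'?>\r\n";
outfile.write(temp, std::strlen(temp));  // length of the string, not of the pointer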

Mister Fancypants
Saturday, January 31, 2004


Thanks everyone for the helpful posts! I really appreciate it. If I had gone to the newsgroups with such a simple question, I would have been yelled at with comments such as "RTFM you newb!!"

Thus ends my quick adventure with C++. I'm now retreating back to the land of C# where I actually feel like I know what I am doing.

Thanks again.

No way. I feel like an idiot already.
Saturday, January 31, 2004

Caliban:

You only *might* get an access violation. You could just overwrite other variables in the stack, or if it's coming from user input, it could just smash the stack and run arbitrary code.

Snprintf will usually tell you when it truncates, too. According to my Linux manpage, GNU libc up through 2.0.6 returns -1 when the buffer is exceeded; from 2.1 on it follows the C99 standard, which specifies that it returns the length it would have written had it not truncated, and that can be compared against the buffer size to see if a problem occurred.
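
So the portable check looks something like this (a sketch; buf and num as in the earlier snippets, and handle_truncation is a stand-in for whatever recovery makes sense):

int n = snprintf(buf, sizeof(buf), "%d.xml", num);
if (n < 0)                       /* old glibc: -1 on truncation or error */
  handle_truncation();
else if (n >= (int)sizeof(buf))  /* C99: length it wanted, so it didn't fit */
  handle_truncation();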

Mike Swieton
Saturday, January 31, 2004

> According to my linux manpage

But this is a Windows program.

Besides, if you're not paying attention to your buffer sizes, you're certainly not checking your return values. Anyone smart enough to check the return from snprintf() is smart enough to add up the maximum buffer size they need for sprintf().

Caliban Tiresias Darklock
Saturday, January 31, 2004

This is a little-known (or at least under-appreciated) distinction in C and C++.

void f() {
  char* temp = "crap";
  // sizeof( temp ) = size of any pointer, commonly 4
}

The above declares a pointer on the stack, which points to a string literal stored elsewhere, typically in a read-only data segment.

void f() {
  char temp[] = "crap";
  // same as char temp[5]  = { 'c','r','a','p','\0' };
  // sizeof( temp ) = 5
}

Declares an array of characters on the stack, and the compiler counts the characters and determines how big the array should be.

But the real answer is: why use sizeof at all? You should use strlen(). For a string known at compile time, sizeof is more efficient, but you don't know the computer's name at compile time.
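
Side by side (a sketch, reusing outfile and szBuffer from earlier in the thread):

const char msg[] = "crap";
outfile.write(msg, sizeof(msg) - 1);        // compile-time length, minus the '\0'
outfile.write(szBuffer, strlen(szBuffer));  // run-time length, works for GetComputerName output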

Andy
Saturday, January 31, 2004

> smart enough to add up the maximum buffer size they need for sprintf().

That's a dangerous attitude to take. How many more buffer overflows will we need before people learn to *never* take those risks? Sure, if you're writing embedded software, the few ops it saves you may be worth it. However, if you're writing *anything* that runs on the network or on a desktop PC, you should steer clear of sprintf and its other non-n function friends.

Koz
Sunday, February 01, 2004

"There are no stupid questions, only stupid people."

<jk>

Bill P.
Sunday, February 01, 2004

> How many more Buffer Overflows will we need before
> people learn to *never* take those risks.

If you know it will never fail, it's not a risk. Look at this code.

char s[12];
srand(time(NULL));
int num=rand();
sprintf(s,"%d",num);

Where is the risk of buffer overflow?

1. The format specification is changed.
2. The int type is larger than 32 bits.

Okay, so we have two potential risks. If they do not exist at compile time, will they exist at run time?

No!

So there's no risk. I know when the program is compiled whether the risk is there, and since I will undoubtedly fix it, the risk will simply never exist on the end user's system. The only other person who can introduce these risks is a programmer who modifies and recompiles the code, in which case he should understand the risks and do something about them. If he doesn't, it's not my fault.
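
The arithmetic behind char s[12], for the record:

// Worst case for a 32-bit int is "-2147483648":
// 1 sign + 10 digits + '\0' = 12 bytes, exactly what char s[12] provides.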

Caliban Tiresias Darklock
Sunday, February 01, 2004

"The only other person who can introduce these risks is a programmer who modifies and recompiles the code, in which case he should understand the risks and do something about them. If he doesn't, it's not my fault."

I'm not sure that it wouldn't be your fault. The person modifying the code might not understand the risks. Perhaps they're rushed; perhaps they're not a C programmer. It's up to you to write code that is not only safe now, but will be resistant to future error.

(Okay, I'm not fussed about this particular example, just thought I'd comment on that attitude in general.)

Xaja
Sunday, February 01, 2004

Wow! That's a new one! So I write good, bug-free code, and later somebody who doesn't know how to program comes along and changes it, introducing bugs. And it's my fault, since I should have written 'code that is impervious to errors when modified'.

Just out of curiosity, Xaja, what country do you live in?

Tony Chang
Sunday, February 01, 2004


When coders collide...

Mark Hoffman
Sunday, February 01, 2004

Sorry guys, I still say you should code defensively. There's a difference between writing bug-impervious code, and code that just begs for bugs to be introduced. Use of unchecked buffers is always less safe than the alternative.

It is a ridiculous argument to try to excuse dangerous code by saying "well, you can't expect perfect code!" (Perhaps this simple sprintf usage is not a perfect example, but I think we can agree that fragile code does exist.) No, I can't expect perfect code. But I can fully expect you to write code that guards against known dangers. No, you can't perfectly future-proof your code, but that doesn't mean you shouldn't try.

Mike Swieton
Sunday, February 01, 2004

Yeah, that is taking a pretty naive attitude to say "well *I* did it right, it was the other guy's fault."  Maybe so, but don't you care about your product?  Wouldn't you want to take some perfectly reasonable precautions to make it a little better?  AT LEAST add an assert after the sprintf.  Let's get real here. 

Andy
Sunday, February 01, 2004

> It's up to you to write code that is not only
> safe now, but will be resistant to future error.

It is. If you make an error in that code, it will break. You will have to either fix it or ship broken code. If you ship broken code, people will notice that it's broken and either fix it or stop using it. Then there won't be an error, because either the code will be fixed, or people will not run the code that causes it.

This also gives the modifying programmer a reputation. If he fixes his problems and ships good code, then he gets a good reputation, and is encouraged to write more code. If he just ships broken code, then he gets a bad reputation, and is encouraged to Stop That.

The alternative is to allow bad programmers a way to ship good code, which gives them false confidence and a false reputation. That false reputation may allow them to get onto a major project where they will mess everything up and cause major headaches for the whole team. Furthermore, it prevents good programmers from earning good reputations based solely on their code, because good code is no longer the sole province of a good programmer.

Caliban Tiresias Darklock
Sunday, February 01, 2004

Yeah, sometimes I purposely insert bugs in my code, just to make sure that the next guy who comes along won't accidentally gain a good reputation.

Andy
Sunday, February 01, 2004

Of course he will. He'll fix them.

You, on the other hand, will not score many points with your buggy code.

Which evens everything out just the way it's supposed to be. The guy who writes crap gets a bad reputation, and the guy who fixes it gets a good one.

Caliban Tiresias Darklock
Sunday, February 01, 2004

And I crack your system when it's in beta and steal all your money. I buy an island, and you have a crappy reputation for writing unnecessarily insecure code. Your boss takes you into the office and asks

"Why did you fail to code defensively?!"

You answer

"I didn't want the next guy to get a reputation he didn't deserve."

Remember, your arrogant (cf. defensive) coding style won't always cause crashes; it might just 'work' 99.999% of the time until someone comes along and deliberately messes around with your input.

Cheers

Koz
Monday, February 02, 2004

But we're not talking about coding to defend against malicious users who may attack the system. We're talking about coding to defend against stupid programmers who may damage the source.

How can we be sure? Well, because no user can affect that code. There's no input from any user anywhere. The only way you can make that code fail is to go in and edit the code, whether you do it in the source or object form.

I protect my code against users, because they are not programmers and should not have to be programmers. They should not need to understand cryptic error messages or power-of-2 buffer limits, and when they make a mistake I should do as much as possible to correct it. If the user closes his database connection and then enters a new SQL query, I should not say "the database is closed" and make him open it again. I should make a reasonable effort to open the database. That's where code should be defensive. And if it is not defensive, YES, it *is* my fault! A buffer overflow is not the fault of a user who entered lots of data, it is the fault of a programmer who did not pay attention to the amount of data he was putting in the buffer.

But when someone opens up a source file, it is likely that he IS a programmer, and if he wants to edit that source I am of the opinion that he SHOULD be a programmer. And even if this is not the case, I *cannot* prevent him from doing stupid things in the code! If he has the code, he can do whatever he wants in it, and I cannot stop him. Even if I could, there is a significant possibility that the programmer is smarter than I am, and that I am actually preventing him from doing something really brilliant that will make the code a whole hell of a lot better.

That doesn't exactly strike me as arrogant. It seems to me that this attitude is *realistic*. If I give you access to my source code, I am at least *hoping* that you will be smarter than I am and make the code better. If I designed the whole thing under the assumption that you are probably an idiot and will just screw everything up, THAT would be arrogant.

Caliban Tiresias Darklock
Monday, February 02, 2004

> If you know it will never fail, it's not a risk. Look at this code.
>
> char s[12];
> srand(time(NULL));
> int num=rand();
> sprintf(s,"%d",num);
>
> Where is the risk of buffer overflow?

If you recompile the app for a machine with 64-bit ints, this can overflow.
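
The same arithmetic as before shows why:

// Worst case for a 64-bit int is "-9223372036854775808":
// 1 sign + 19 digits + '\0' = 21 bytes, which blows past char s[12].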

Anonymous Coward
Thursday, February 05, 2004

Anyone recompiling an app on a new platform should know what they are doing. If they don't, it's not my problem. If anyone calls me up and says "your app doesn't work on 64-bit Windows", I'll say "nope, sure doesn't, we don't support 64-bit platforms". End of story.

Unsupported means exactly that. I don't have to worry about it, and I don't have to account for it. If you want to do it anyway, you're working without a net, and if you fall and bust your head open it's your own damn fault.

Caliban Tiresias Darklock
Thursday, February 05, 2004

Caliban, put down the shovel. The hole you have dug yourself is deep enough.

Gyro Jennings
Saturday, April 03, 2004
