Fog Creek Software
Discussion Board




Load Testing really would require 4 engineers?

I don't know if I really agree with that number.  I think it would only require one engineer and 10 hours to complete the task.  First, if they use Microsoft ACT, which comes with Visual Studio .NET, they can record navigating to the website and walking through the flow.  Then they would need to set up a Performance Monitor log to watch certain counters on the server.  All of this should take no more than one hour to set up.  Then they just run the stress test for at least 4 hours.  Once that's done, the remaining 5 hours would be used to analyze the results and post bugs as required.  In your scenario, depending on how many connections/users were set up in ACT, the server should have crashed relatively quickly.  Of course, I didn't include time to ramp up using ACT.

Gurpal S. Hundal
Monday, November 11, 2002
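
For readers who haven't used ACT: a recorded script boils down to replaying the captured HTTP walk-through from many simulated clients at once. A rough sketch of that idea in Python follows; the URL, user count, and think time are made-up placeholders, not values from any actual FogBUGZ test.

# Sketch of what a recorded ACT script automates: many threads
# replaying the same HTTP walk-through against the server under test.
import threading
import time
import urllib.request

TARGET = "http://testserver.example.com/default.asp"  # hypothetical URL
USERS = 50                 # simulated concurrent users (illustrative)
DURATION_SECS = 4 * 3600   # the 4-hour stress run described above

def simulated_user(stop_time):
    while time.time() < stop_time:
        try:
            with urllib.request.urlopen(TARGET, timeout=30) as resp:
                resp.read()
        except Exception as exc:
            print("request failed:", exc)  # a server crash shows up here quickly
        time.sleep(1)  # crude think time between page hits

stop = time.time() + DURATION_SECS
threads = [threading.Thread(target=simulated_user, args=(stop,)) for _ in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()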

In fact, you didn't include a lot of things, like just figuring out and setting up a test case to load-test a brand new operation on a brand new feature of a brand new product.

pb
Monday, November 11, 2002

1) It's 4 "engineer days".  That's a lot different than 4 "engineers".
2) Your own 'off-the-cuff' estimate (10 hours) is close to 2 "engineer days".  And you left out a lot of stuff.

I estimate an engineer day for me as 6 hours (for an 8 hour day).  You can't tell me that the second you get into work you sit down, get into a flow, have no interruptions, no meetings, and then you get up and leave to go home.

In the "real world", I get into work...I read email (that takes time).  I get interrupted (that takes time).  I go to lunch (that stops my 'zone').  I go to meetings.  I'm not able to put in 8 hours of "work" in 8 hours.

How do you even manage to keep a schedule and stick to your completion dates?

Engineering DAYS, silly!
Monday, November 11, 2002
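
A quick check of the arithmetic, using the 6-focused-hours figure from the post above:

# Converting a raw hour estimate into "engineer days" using the
# 6-focused-hours-per-8-hour-day figure from the post above.
FOCUSED_HOURS_PER_DAY = 6

estimate_hours = 10  # Gurpal's off-the-cuff estimate
engineer_days = estimate_hours / FOCUSED_HOURS_PER_DAY
print(f"{estimate_hours} hours = {engineer_days:.1f} engineer days")
# -> 10 hours = 1.7 engineer days, i.e. close to 2, as claimed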

pb, the server that had trouble was the FogBUGZ test server, which has been in service for a while (from what I understand). Hardly a "brand new operation on a brand new feature of a brand new product."

Of course, I think it was precisely that fact that made Joel & Co. slightly complacent; the server had been working fine with whatever load it was getting, so load testing seemed superfluous.

Martha
Monday, November 11, 2002

Having some experience with this, stuff that could add time:
1) Duplicating certain load conditions exactly
2) Generating sufficient test data (see the sketch after this post)
3) Configuring servers
4) Configuring routers
5) Creating a network to simulate the production environment

All of this takes a lot more time than the actual load test.

Daniel Shchyokin
Tuesday, November 12, 2002
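
On point 2: data generation is itself usually scripted. A minimal sketch of bulk-creating throwaway user accounts for a load tool to import; the field names and count are illustrative, not from any real schema.

# Sketch of bulk test-data generation: writing a thousand throwaway
# user accounts to a CSV for the load tool to import.
import csv

NUM_USERS = 1000  # illustrative count

with open("loadtest_users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["username", "password", "email"])
    for i in range(NUM_USERS):
        writer.writerow([f"loaduser{i:04d}", f"pw{i:04d}",
                         f"loaduser{i:04d}@example.com"])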

And some more stuff (Murphy's always waiting for you):

1) Surprises you find along the way. I'd list them here, but I'd have to know what they are, and if I did, they wouldn't be surprises <g>.
2) Follow-up and one-off test runs to respond to nearly inevitable finger-pointing. The degree of contention depends on who's involved, of course - totally internal, other vendors (those are fun), clients' legacy systems, etc.
3) Somebody hit on data generation already, but to expand on it: if the test requires multiple user accounts, especially if they cannot be reused for some reason or other (depends on the target transaction), and if new users cannot be created through the system's testing interface itself, the client is likely going to have to create at least part of your test data for you - possibly hundreds (or more) of individual user accounts, depending on the testing situation. When the client has to create data for you, it's a generally safe bet that they will muck it up somehow, and you'll have to take time to carefully check what they've done for you; then everything that's wrong will have to be fixed. It all takes extra time. We've had clients that had to create 500-1000 user accounts (it took forever for them to get it done), and clients that had to establish several hundred records in their legacy db that met specific screening criteria and had no errors in them (we didn't want to include error handling in this particular perf. test). It always gets messed up the first time or two or three around before they get your data/users set up right.
4) Chasing down the network bottleneck that everybody had failed to notice before now because <fill in reason>.
5) There's some oddball security interface on the application that the automated testing software just can't seem to interact with properly.
6) Explaining to clients or other non-technical folks the results (and caveats) of performance testing, in such a way that they don't mis-understand, mis-quote, or otherwise mis-anything the information you developed from testing their system. Like the differences among types of performance-related testing, for example. Or what a "virtual user" is, or why just saying your site can handle 'X-hundred' concurrent users (define 'concurrent'; what are the users doing?) isn't complete or meaningful, and can be grossly misleading. (See the note after this post.)


Personally, I enjoy performance-related testing more than functional testing, but it does take a good bit of explaining to most audiences so that they truly understand what the results mean and what they don't.

cheers,

anonQAguy
Tuesday, November 12, 2002
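
On pinning down "concurrent": one standard way is Little's Law, which says the number of users in the system equals the arrival rate times the time each user spends per request cycle. A sketch with made-up numbers:

# Little's Law: concurrent_users = arrival_rate * time_in_system.
# The numbers below are made up purely to show why "X-hundred
# concurrent users" is meaningless without saying what the users do.
requests_per_sec = 20.0   # measured server throughput (hypothetical)
response_time = 0.5       # seconds per request
think_time = 10.0         # seconds a user pauses between clicks

concurrent_users = requests_per_sec * (response_time + think_time)
print(f"{concurrent_users:.0f} concurrent users")  # -> 210

# Drop the think time (users hammering the site back-to-back) and the
# same server supports only 20 * 0.5 = 10 truly concurrent requests.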

