Anyone using TruClient yet? I'm used to getting 50-75 virtual users (Web) per load injector PC. But I just read this in a support PDF from HP?!
Can anyone confirm this? I can't imagine how anyone could run a 500 user test using this protocol, getting maybe 3 vusers per load injector?
################################ How Many Ajax TruClient Vusers Can I Run on a Single Load Generator? The number of Ajax TruClient Vusers that can run on a single load generator machine depends on the application under test and hardware parameters. Internal HP benchmarks indicate that, for various applications under test, a single Ajax TruClient Vuser can use around 60-120 MB of memory (footprint) and consume 3-30% of a single CPU core. We recommend assessing the resource utilization parameters per application to arrive at a realistic number for sizing the load test. ################################
What you have read in the article is true. TruClient is a UI-level protocol and doesn't work at the transport level like the web protocol does. Memory consumption of a TruClient Vuser includes the browser instance, DOM, and JS, and it varies depending on the application. It is advised to perform a benchmark/sizing test for each individual application to learn the actual footprint of the Vuser.
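To make that sizing advice concrete, here is a minimal sketch of the arithmetic, using the per-vuser footprint range from the HP excerpt above (60-120 MB of memory, 3-30% of one core). The default worst-case figures and the OS memory reserve are illustrative assumptions; you should substitute numbers measured against your own application.

```python
# Rough load-generator sizing sketch based on the HP figures quoted above.
# The per-vuser footprint defaults (120 MB, 30% of a core) and the 1 GB OS
# reserve are assumptions for illustration - measure your own app first.

def vusers_per_generator(total_mem_mb, cores,
                         mem_per_vuser_mb=120, cpu_per_vuser_pct=30,
                         os_reserve_mb=1024):
    """Return a conservative vuser estimate for one load generator."""
    by_memory = (total_mem_mb - os_reserve_mb) // mem_per_vuser_mb
    by_cpu = int(cores * 100 / cpu_per_vuser_pct)
    # The tighter of the two constraints wins.
    return max(0, min(by_memory, by_cpu))

# A dual-core machine with 2 GB RAM, assuming worst-case footprint:
print(vusers_per_generator(total_mem_mb=2048, cores=2))  # -> 6
```

With worst-case footprints the CPU constraint dominates, which lines up with the single-digit vuser counts reported later in this thread.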
Madan is right - the Ajax TruClient virtual user runs at the UI level and requires more memory and CPU resources per virtual user.
Keep this simple equation in your mind: 1 hour of scripting = n dollars.
How many hours do you spend working on low-level script correlations? How many times do you have to completely throw away an existing low-level script and start over again (and repeat the scripting correlations)? How many hours are spent trying to explain a web_submit_data() function to a business analyst? Take all of those hours and multiply by n dollars. For example, if your time is worth $100 per hour to the company, and you spend 80 hours working on a script...that's $8,000.
If you switch to Ajax TruClient - most of those low-level scripting complications are eliminated and you save all those hours of time. And, with Ajax TruClient you don't have to completely throw out the old scripts with each new version of the application. That's even more time savings.
So, then you can spend that $8000 savings on more memory and CPU for load generators.
And with the time savings, generally - you can get to test execution, analysis, and results faster. You spend more time focusing on the application and performance issues...and LESS time messing around with scripts.
It's that simple.
Mark Tomlinson Performance Testing and Engineering Guru http://mtomlins.blogspot.com
That all sounds great, until you realize you still need to verify the application can support large numbers of vusers (1000, 2500, 5000, etc.) - and now you have to burn up all that "savings" acquiring a nuclear reactor and renting the entire Amazon EC2 cloud to actually run that load test with TruClient. Higher-volume test runs still need to be done. How about that 100,000-vuser load test everyone at EMC was bragging about in 2009? What if they decided to go all AJAX in the next release and the only way to test it is through AJAX? Indiana Jones would have to find the Ark of the Covenant to be the load generator for that test.
What we're being told right now is that TruClient is the best load testing solution for Ajax applications (if you are using LoadRunner). Most likely it will be the ONLY way to get complex AJAX applications tested, because straight web/HTTP or Ajax Click & Script won't work. But in exchange for this ability, you can only run 100-vuser tests, and all those enterprise-level rollouts where you need to verify 2500 to 5000 vusers - those days are over unless you have the hardware budget of Homeland Security. It's a wash on the cost.
Don't misunderstand my point here. I love TruClient and I am thrilled that we finally have something that can test some of the more cutting-edge AJAX apps. That's wonderful. I love HP for bringing it to us. The fact that you can have it included if you already own the Web 2.0 bundle is awesome. But saying this is a huge cost savings because even Forrest Gump can now script the application is not a valid marketing play when the elephant in the room (no pun intended) is that you can only run vuser volumes at a fraction of what you did before with the same hardware.
Why is it such a bad thing for a performance engineer to get into the nitty-gritty details of the application during scripting? I've found things like SQL Server administrator passwords being passed within HTML hidden fields of a web app. You would never find that sort of thing with a higher-level protocol. Of course, if you're going to hire an unqualified resource to test your critical applications, and all you care about is whether they can use a mouse and keyboard without drooling enough to short out the connections, then you deserve what you get. We'll be waiting to clean up the mess at the end of it, and it won't be for free.
" For example if your time is worth $100 per hour to the company, and you spend 80 hours working on a script...that's $8000 dollars."
Mark - If my time is worth $100 per hour and I take 80 hours to do a low-level script, I deserve to be fired ASAP. I would expect a $100/hr resource to take no more than four to six hours to do a complex script.
I would add that there is a real benefit to having someone review the traffic. By recording a script at the protocol layer and having to learn how it works, what it is doing and why, I generally end up offering advice to the dev team on ways they can improve site performance. Like: hey guys, you have already made the getUser call earlier in this flow, why don't you cache it? Sure, I could use Wireshark, but there's something wholesome and healthy about having your performance testers understand your stack, and I'd argue that where they do, it is not so much work to put an HTTP script together, not really.
I have successfully run a 100-user load test with TruClient (Patch 2). Using a dual-core 2.8 GHz machine with 2 GB RAM, I was able to run 8 users per machine.... any more than that and the CPU stayed at 100%. My test ran for about 1 hour, included 4 scripts, and was a ramp-up style test.
I have repeated the test multiple times and have also run a constant load test. I did not experience the 'CPU climb' issue reported in another thread.
I would recommend moving to quad-core, high-clock-speed machines with 4 GB RAM or more as generators to increase the vuser count.
Let me know if you find it useful... or have anything to add.
The HP forums are so obscure most users never find them... I got here by accident. I've been trying to get HP to put an 'easy to find' URL on their forums site... like forums.hp.com... or forums.bto.hp.com... but no luck so far. I guess h30501.www3.hp.com is easier to remember.
You are correct that when you are looking to run large loads (>1000), TruClient may become problematic due to the memory requirement.
However, you are probably well aware (as was Mercury when they used to offer the "Double Your Performance" offering) that most applications that expect to handle 1000's of users usually first break in several different areas at loads < 250-500 vusers before they start to see the major jumps.
Why is this important? Because, after you show the developers the app breaks at 50 users, the developers will go back and modify the app, which means you will need to create new scripts for the modified app - and TruClient will enable you to script fast and be as agile as the developers.
[nodding to fmartinez] - you are right, there can always be improvements made to the TruClient technology, and I'm certain that the guys in R&D are working really hard to improve client-side measures and breakdown, to enable exactly what you are asking.
Having discrete timings for asynchronous client-side actions/components is incredibly valuable. To do this right - in the MOST ACCURATE WAY - we would need to have a single virtual user running by itself on a load generator, so the local resource utilization didn't impact the measurements of the UI response times. In your experience, can you think of any existing solution for running a single end-user session with the full client GUI taking response time measurements of the end-user experience?
It's called a LoadRunner GUI Virtual User.
Mark Tomlinson Performance Testing and Engineering Guru http://mtomlins.blogspot.com
[smiling at mtomlins] - really no point in developing a GUI virtual user script... just run a single TruClient user on one generator... it's not like the scalability allows you to run much more than that
On a more serious note... it is possible to run a high-volume test (a couple hundred vusers)... you will just need a lot of machines... or machines that are much newer. As in my previous post, you can expect maybe 8 users or so on a dual-core 3 GHz machine. This 'old' hardware is just not going to cut it anymore. It is time to budget for new equipment.
The good news is that hardware is 'cheap' these days. Look for something in the line of an Intel i7 970 or higher (6 core)... or an i5 2600 (4 core). The i7s are about $1,500 on the HP site and the i5s are about $900. You also want at least 6 GB RAM.
We are working on a project to benchmark various hardware to see what scalability we can achieve... so hopefully I can share our results in the future.
We hope to get in the range of 25 to 35 users per machine... we are guesstimating a scalability of 6 to 8 users per core and a cost of $40 per user... so an estimate would be in the range of $20k for around 15 machines, which should put us close to 500 vusers.
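The guesstimate above can be sketched as a quick back-of-the-envelope calculator. All the inputs (6-8 vusers per core, hex-core boxes at roughly $1,500) are the figures from these posts, not benchmarked results, so treat the output as a planning estimate only.

```python
# Back-of-the-envelope TruClient hardware estimate using the figures from
# this thread: ~6-8 vusers per core, hex-core i7 machines at ~$1,500 each.
# These are the poster's guesstimates, not measured benchmark results.

def machines_needed(target_vusers, cores_per_machine, vusers_per_core=7):
    """How many generators for a target vuser count (ceiling division)."""
    per_machine = cores_per_machine * vusers_per_core
    return -(-target_vusers // per_machine)

machines = machines_needed(500, cores_per_machine=6)  # 6-core i7 boxes
cost = machines * 1500                                # assumed ~$1,500/box
print(machines, cost)  # prints: 12 18000
```

Using the midpoint of 7 vusers per core, the estimate lands near the thread's "$20k for around 15 machines" ballpark; the more pessimistic 6-per-core figure pushes it closer to 14 machines.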
Seems to me there is a solution here that gives the best of both worlds: accurate real-life response times capturing the actual user experience, as well as a scalable load test solution. Use both TruClient & Web.
OK, ok, yes, this has the overhead of doubling up on scripting and cost, but to me it actually sounds like an attractive enhancement to the old-style (I'm only testing the server) way of doing things. In the situation where you are working on a high-transaction application requiring the simulation of high user concurrency, you really have to stick with the traditional HTTP-layer test approach - it's insane to try to scale up at the GUI level too far. But testing at the HTTP level always leaves a gap in your results: what about the real user experience? You can plug this in various ways, like using QTP or some other browser-based script, but now we have a fully featured solution that gives us this data and is totally integrated into the scenario.
TruClient offers a complete and excellent solution for those horribly hard-to-script internal sites that do not require large numbers of virtualised users - if that's you, then great, off you go - but it also acts as a great add-on to any test where the volumetrics demand more traditional methods and it is not sensible / cost-effective to try to scale at the GUI level.
And really, the extra effort should not be so great because you have no need to script everything twice; just use TruClient on the main critical paths that give you some key metrics and then have the main test body running using Web scripts. A few extra GUI TruClient users give you a far richer dataset.
Using this setup, you could then say: "With the system under the peak load, as defined, we saw that 90% of login requests sampled were returned to the end user within 2.3 seconds. For the same request 90% of requests were served by the server in less than 0.8 seconds giving a 90th percentile for page load time of 1.5 seconds." Then you can be all smart and talk about cache headers and gzip, the point is it is realistic page rendering time, whilst the system is under load. It is easy, repeatable, it takes account of CDNs where you might want to avoid static content in the majority of your requests and you get to do all this from one tool, with all your data stored in one place. Nice, no?
The best test environment is the production environment.
We have users executing their business processes exactly as we want them to, and they are constantly assessing the quality of service delivered versus what is expected (the SLA).
LoadRunner (URL, HTTP, Ajax, TruClient) helps us predict the level of performance the users will experience and highlight the amount of resources required by the system to support that performance.
The more closely the workload matches how the system will be used in production, the better I can provide recommendations to realize optimal performance levels.
What we have recently realized is that when a user says it's so slow to do a search... the reason it is slow is that the browser (client) is pegged at 100% CPU because it is caching all the data in the browser....
Hi, this is a great thread. I just started a contract project at a client. They had the controller AND generator running on one PC using TruClient protocol scripts. In doing the tool discovery, I was instantly amazed that the setup worked for a 47-vuser test. And that was BEFORE I read this thread. I had not realized TruClient scripts used that much memory and CPU.
Now to the whole reason for my posting: my last place of employment opened a whole new world of generator flexibility to me that I think will make this whole TruClient memory/CPU usage a little more tolerable. We had a large VMware system, and I had two VMware Server systems (instances) built at my last job. For anybody who knows VMware, you know that it can be scaled up and down on memory and CPU as needed. I found that we have VMware here also and am on the road to getting them built here. The other nice thing is that once one is built, a snapshot can be taken of it and more can be spun up in an extremely short timespan.