We are facing a problem: there is a long delay when executing DB rules (migrating caller, info, description, CI, etc. from SC to WO). We have tried different rule orders, different condition orders, and reducing the number of rules, but with no result: the delay is about 10-40 minutes!
We also found lots of messages like this in the log: "no of scheduled tasks loaded for this server: 399"
What does this mean? Can it cause such delays? Maybe a large queue of scheduled tasks is causing the hangs?
Hi, we are facing this too! Running on a Solaris platform with an Oracle database is not enough to avoid significant propagation delays during execution of DB rules. For example, we defined a standard RFC with associated WOs in a predecessor/successor structure. Sometimes it takes between 5 and 15 minutes for data to migrate from the initial service call to the currently assigned WO! Indeed, 40 minutes is too much, but that could be related to your platform and server performance. Waiting for good news! Kind regards, Dan
A system I worked on experienced this problem. As the number of rules and users increases, the "rule thread" starts to struggle. We found that the situation was much improved by running 2 Application Server Instances on one box (or on each box, if you already have several). This increases the number of "rule threads", spreads the load of connected users, and generally makes better use of the App. Server hardware (assuming you have enough spare memory). Of course, you should also re-appraise your rules to make sure they are all still needed (or whether newer facilities in SP7 and beyond mean you might need fewer of them!).
We have experienced instances where the DB rules fire later than expected, or never at all. It seems to happen in waves, and then they start working again. The best thing we have found is to bounce the application servers and clear the cache to see if that helps.
Mike - You mentioned having multiple app servers on a single physical server. We have done that and modified the Java settings to allocate more RAM to the processes. Any estimate, from your perspective, of how many "logical" servers you should be running? When you have multiple instances installed, do you install them all in the same directory and just separate them with the XML files, or do you keep a separate install for each? We use the same install directory with multiple XML files, which means all the servers share the same cache folder. A support person told me a year ago to try installing separately so that each server has its own cache folder. Any thoughts on that one?
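For what it's worth, the separate-install approach the support person described amounts to giving each logical instance its own directory tree, so each config points at a private cache rather than a shared one. Here is a minimal sketch of that layout; the directory names, the `server.xml` file name, and the `<cachedir>` setting are purely hypothetical illustrations, not the product's actual configuration keys:

```shell
#!/bin/sh
# Sketch: one directory tree per logical app server instance, so each
# instance gets its own private cache folder instead of a shared one.
# All paths and setting names below are hypothetical.

BASE=$(mktemp -d)   # stand-in for something like /opt/appserver

for INSTANCE in server1 server2; do
    # separate install root and private cache per instance
    mkdir -p "$BASE/$INSTANCE/cache"

    # each instance's config points at its own cache directory
    cat > "$BASE/$INSTANCE/server.xml" <<EOF
<server>
  <!-- hypothetical setting name, for illustration only -->
  <cachedir>$BASE/$INSTANCE/cache</cachedir>
</server>
EOF
done
```

The point of the separation is that two instances writing into one shared cache folder can step on each other's files; with one tree per instance, a bounce or cache clear on one instance cannot corrupt the other's state.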