Project and Portfolio Management Practitioners Forum

PPM 9.14 Multicast issue

SOLVED
Go to solution
ppm914
Super Collector

PPM 9.14 Multicast issue

We recently upgraded our PPM application from 8.03 to 9.14.

We also moved from physical servers to VMs. We have 3 VMs clustered and 6 nodes.

 

Soon after the upgrade we started getting errors in server logs:

 

ERROR server:ClusterMonitor.HeartbeatThread:com.kintana.core.server.cluster.MulticastCommsFailureMonitor:2012/09/05-10:15:20.169 EDT: No multicast traffic has been received on the APP_SERVER_MULTICAST_PORT port since this node was started, even though other nodes in the cluster appear to be active. Please check your multicast routing, server.conf settings, and/or firewall settings.

 

Has anyone received such an error? Any help in this regard is appreciated.

 

Thanks!

12 REPLIES
Utkarsh_Mishra
Honored Contributor

Re: PPM 9.14 Multicast issue

Can you share your server.conf (multicast & node parameters)?

Cheers..
Utkarsh Mishra

-- Remember to give Kudos to answers! (click the KUDOS star)
ppm914
Super Collector

Re: PPM 9.14 Multicast issue

Thanks for quick response!

Please find it attached.

Jim Esler
Honored Contributor

Re: PPM 9.14 Multicast issue

Since you moved to new servers, you are probably connected to different network routers. Make sure multicast support is enabled on the routers supporting your new servers. This support is usually disabled by default.

ppm914
Super Collector

Re: PPM 9.14 Multicast issue

Thanks for your response, Jim!

 

Could you please elaborate on "multicast support is enabled on the routers supporting your new servers"?

 

What I understand is that I have to ask the network admin to enable multicast support on the network routers.

Jim Esler
Honored Contributor

Re: PPM 9.14 Multicast issue

Yes, the network admins need to enable multicast on the subnet that your servers reside on or on the network path between your servers if they are not all on the same subnet.

ppm914
Super Collector
Solution

Re: PPM 9.14 Multicast issue

This is a known defect.
 

Cause

Defect QCCR1L44795


Prior to versions 9.14 and 8.04, Project and Portfolio Management (PPM) did not provide a method to explicitly bind the MULTICAST_PORT multicast socket to a specific network interface card (NIC).


Fix


After installing 9.14, if the MULTICAST_NIC_IP parameter is specified in the server.conf file, the JGroups and MULTICAST channel multicast sockets will bind to the NIC specified by the MULTICAST_NIC_IP parameter.

The value of the MULTICAST_NIC_IP parameter can be a host name or an IP address.
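
For example, a minimal server.conf sketch (the com.kintana.core.server prefix mirrors the other parameters quoted in this thread, and nn.nn.nn.nn is a placeholder for the NIC's host name or IP; whether the line belongs at the global level or in an @node section is not confirmed here):

com.kintana.core.server.MULTICAST_NIC_IP=nn.nn.nn.nn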

dongqiu
Acclaimed Contributor

Re: PPM 9.14 Multicast issue

Can you give a sample server.conf file with MULTICAST_NIC_IP? I searched the 9.10 and 9.20 Installation and Administration Guides but didn't find it.

 

 

I assume this has to be in the @node section. Does this look correct?

com.kintana.core.server.MULTICAST_NIC_IP=192.168.x.x

 

I know this is an old post. But we are having the same problem and searching for answer.

 

cshsleesam
Acclaimed Contributor

Re: PPM 9.14 Multicast issue

I know I'm late to this party, but we're having the same issue. Per HP Support, there are three multicast channels that PPM uses. One of them has a bug where an HP developer hard-coded the TTL to 1. Yeah, HP code review & QA missed that one.

cshsleesam
Acclaimed Contributor

Re: PPM 9.14 Multicast issue

BTW, the fix they recommend (hard-coding the multicast NIC IP) didn't work for us. We're on a single-NIC Windows Server 2008 VM with PPM v9.14.

Loc_Nguyen_PPM
Occasional Visitor

Re: PPM 9.14 Multicast issue

Hi everyone,

 

I found it in our knowledge base.

 

It would be a good idea to follow these steps and check whether you see any difference (a condensed shell sketch follows the list):

1. Stop the PPM server.
2. Delete the work & tmp folders from PPM_HOME/server/SERVER_NAME (repeat for all nodes).
3. Add the parameter to server.conf:
MULTICAST_WARNING_MINUTES=0
4. From the command prompt under PPM_HOME/bin, run: sh kJSPCompiler.sh
5. Restart the PPM server.
6. From the command prompt under PPM_HOME/bin, run: sh kRunCacheManager.sh
7. Clear the browser and Java caches.
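
A condensed shell sketch of steps 1-6 for a UNIX install, assuming PPM_HOME is set and the node directory is SERVER_NAME (the kStop.sh/kStart.sh script names and the full com.kintana.core.server parameter prefix are assumptions here); adjust paths to your environment and repeat the work/tmp cleanup on every node:

cd $PPM_HOME/bin
sh kStop.sh -now                                                # 1. stop the PPM server
rm -rf ../server/SERVER_NAME/work ../server/SERVER_NAME/tmp     # 2. delete work & tmp
echo "com.kintana.core.server.MULTICAST_WARNING_MINUTES=0" >> ../server.conf   # 3. add the parameter
sh kJSPCompiler.sh                                              # 4. recompile JSPs
sh kStart.sh                                                    # 5. restart the server
sh kRunCacheManager.sh                                          # 6. flush the caches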

I would also suggest recompiling the HTML pages, which requires running kUpdateHtml.sh.

Please review the following article about the multicast settings:

http://support.openview.hp.com/selfsolve/document/KM1083941

Also, check that both nodes are on the same subnet and use the same gateway. If they are, then use one of the following tests to show if the multicast is actually working on these servers. Once it is confirmed that the multicast tests are successful, the multicast error messages should stop occurring in the logs.
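
To check the subnet and gateway quickly, a minimal sketch (assuming Linux nodes; on Windows, ipconfig /all shows the same information):

ip addr show               # compare the address and netmask on each node
ip route | grep default    # compare the default gateway on each node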

A. Run Multicast Test from the Command Line

Step 1: On each machine where a PPM_HOME is present, note the location of the jgroups-2.6.15.GA.jar file.

For example:
D:\PPM_HOME\server\HPPPM_91\lib\jgroups-2.6.15.GA.jar

NOTE: For earlier versions, the file is PPM_HOME/server/kintana/deploy/itg.war/WEB-INF/lib/jgroups-all.jar

Step 2: On Windows, go to Start | Control Panel | System | Advanced tab | Environment Variables button.

Step 3: Add the file to the CLASSPATH variable (you may have to click New).

Step 4: Reboot the machine to refresh the system variables (not always necessary; you can try closing any existing command windows and re-opening them before running the test).
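
As an alternative to the Control Panel dialog, a sketch using setx from a command prompt (the jar path is the example path from Step 1; note that setx overwrites any existing user CLASSPATH value, and you still need to open a new command window afterwards):

setx CLASSPATH "D:\PPM_HOME\server\HPPPM_91\lib\jgroups-2.6.15.GA.jar"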
Step 5: Open a new Command Prompt and check the JGroups version with the command:
java org.jgroups.Version

NOTE: On UNIX, set the classpath in the shell with the export command.
For example:
$ export CLASSPATH=/home/ppm/server/HPPPM_91/lib/jgroups-2.6.15.GA.jar

Step 6: Start a Multicast Receiver using an IP/port combination:
java org.jgroups.tests.McastReceiverTest -mcast_addr 225.39.39.067 -port 9000

NOTE: If any PPM nodes are running with the same IP/port combination, you will see their messages in the window, so either stop the PPM nodes using that IP/port combination or use a unique pair for testing.

Unique for each cluster:
com.kintana.core.server.MULTICAST_IP=225.39.39.067
com.kintana.core.server.MULTICAST_PORT=9000

cache.conf uses MULTICAST_IP and port 46545:
protocol.stack=UDP(mcast_addr=${MCAST_ADDR};mcast_port=46545;ip_ttl=32;mcast_send_buf_size=150000;mcast_recv_buf_size=80000):\

Unique for each node in cluster:
com.kintana.core.server.APP_SERVER_MULTICAST_PORT=6613

You can add the IP of the NIC that PPM is bound to as an additional parameter. By specifying that IP (PPM's license IP / each physical machine's NIC binding, instead of nn.nn.nn.nn), you can ensure the test is bound to the same NIC.

For example:
java org.jgroups.tests.McastReceiverTest -mcast_addr 225.39.39.067 -port 9000 -bind_addr nn.nn.nn.nn

Step 7: Start a Multicast Sender using the same IP/port combination to send a test message with the following command:
java org.jgroups.tests.McastSenderTest -mcast_addr 225.39.39.067 -port 9000

You can add the IP of the NIC that PPM is bound to as an additional parameter. By specifying that IP (PPM's license IP / each physical machine's NIC binding, instead of nn.nn.nn.nn), you can ensure the test is bound to the same NIC.

For example:
java org.jgroups.tests.McastSenderTest -mcast_addr 225.39.39.067 -port 9000 -bind_addr nn.nn.nn.nn

NOTE: You can then type text in the sender window and the message will appear in the Multicast Receiver command window.
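
For example, an illustrative session (the output format is approximate and will vary by JGroups version):

In window 1, start the receiver and wait:
java org.jgroups.tests.McastReceiverTest -mcast_addr 225.39.39.067 -port 9000

In window 2, start the sender and type a line of text:
java org.jgroups.tests.McastSenderTest -mcast_addr 225.39.39.067 -port 9000
hello

If multicast is working, "hello" is printed in the receiver window; if nothing arrives, the traffic is being dropped somewhere between the two machines.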

The attached document shows screenshots of the test described above.

B. Use a multicast ping utility
ssmping is a tool for checking whether one can receive SSM (source-specific multicast) from a given host. If a host runs ssmpingd, users on other hosts can use the ssmping client tool to test whether they can receive SSM from that host.

The tool is available on the following link:
http://www.venaas.no/multicast/ssmping/
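
A minimal sketch of such a test (assuming the tools are installed on both machines; exact flags vary by version, so check the ssmping man page):

On the sending host, run the reflector daemon (usually as root):
ssmpingd

On the receiving host, ping the sender over multicast:
ssmping -4 <sender-hostname>

ssmping reports both unicast and multicast replies; if only unicast replies come back, multicast traffic is being blocked somewhere on the path.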


HP Support
If you find that this or any post resolves your issue, please be sure to mark it as an accepted solution.
5keeve
Super Collector

Re: PPM 9.14 Multicast issue

So I did the Sender/Receiver tests and they work.

 

However, I still see:

 

ERROR server:ClusterMonitor.HeartbeatThread:com.kintana.core.server.cluster.MulticastCommsFailureMonitor:2014/10/01-16:03:45.188 CEST: No multicast traffic has been heard from node mynode1 on the MULTICAST_PORT port for over 2 minutes even though the node appears to be up. Please check your multicast routing, server.conf settings, and/or firewall settings.

 

 What else can we do to troubleshoot?

Changfa
Acclaimed Contributor

Re: PPM 9.14 Multicast issue

My PPM application has two servers and three user nodes: one user node on the primary services server and two user nodes on the backup services server. The primary service and user node 1 suddenly cannot be seen in the Workbench, which says they are not available. I also see this message on the backup service node:

ERROR server:ClusterMonitor.HeartbeatThread:com.kintana.core.server.cluster.MulticastCommsFailureMonitor:2017/03/05-08:02:27.083 CST: No multicast traffic has been heard from node Services_Primary on the MULTICAST_PORT port for over 3 minutes even though the node appears to be up. Please check your multicast routing, server.conf settings, and/or firewall settings.

There was no server.conf change, but there was a server patch.

I see these settings in the server.conf file: com.kintana.core.server.MULTICAST_IP=237.0.0.3, com.kintana.core.server.MULTICAST_PORT=9003, and com.kintana.core.server.APP_SERVER_MULTICAST_PORT=9101. Is the IP 237.0.0.3 a real one on our network? I cannot ping it. Does this IP address cause the multicast failure?

Thanks.
