PPGPCR works well under most scenarios and conditions. That said, PPGPCR is not well suited to networks where many endpoints connect to the closest DC (and the PPGPCR server) over a constrained link.
When PPGPCR is used in PUSH mode (with PPGPCR Auditor), it relies on the built-in GPRESULT /X command to generate RSOP reports, which are then sent up to the PPGPCR server for storage and processing.
Each time this happens, about 10 MB of data is sent over the network. That is simply the nature of GPRESULT /X output.
You can roughly estimate how long PPGPCR will take to push data from the endpoints up to the server:
Use this calculator: http://ibeast.com/tools/band-calc.asp
Estimate about 10 MB per endpoint for each GP update. For example, with 9 computers over a 1.5 Mb link to the closest DC, the upload would take roughly 8 minutes and 8 seconds (Screenshot: http://screencast.com/t/KzLNHY4vcPJZ )
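If you'd rather script the estimate than use the web calculator, the same arithmetic can be sketched in a few lines of Python. The 10 MB figure and the sequential-upload assumption come from the text; whether a "MB" is decimal (10^6 bytes) or binary (2^20 bytes) is a unit choice the calculator doesn't spell out, so both are offered here:

```python
# Back-of-the-envelope upload time for PPGPCR Auditor pushes.
# Assumptions (ours, not the vendor's): ~10 MB of RSOP data per
# endpoint, and all endpoints share the one link sequentially.

def upload_time_seconds(endpoints, link_mbps, mb_per_endpoint=10.0,
                        binary_mb=False):
    """Estimated total upload time in seconds for one GP update."""
    bytes_per_mb = 2**20 if binary_mb else 10**6
    total_bits = endpoints * mb_per_endpoint * bytes_per_mb * 8
    return total_bits / (link_mbps * 10**6)

# The example from the text: 9 computers over a 1.5 Mb link.
secs = upload_time_seconds(9, 1.5)
print(f"{secs / 60:.1f} minutes")
```

Depending on the unit convention, this gives roughly 480-503 seconds, consistent with the ~8-minute figure above.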
The important points here are:
PPGPCR has a Heartbeat, so the server stays updated.
Note: values may vary slightly from run to run, but in summary: after a gpupdate, the next PPGPCR Auditor run consumes about 10 MB of network bandwidth no matter what, because a new RSOP must be generated.
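Because that ~10 MB cost is per endpoint per auditor run, fleet-wide planning is just multiplication. A small sketch, using hypothetical fleet numbers (the 10 MB per run comes from the text; the endpoint and run counts are made up for illustration):

```python
# Rough server-side ingest per audit cycle for PPGPCR Auditor.
# Assumption from the text: ~10 MB of RSOP data per endpoint per run.

def cycle_mb(endpoints, runs_per_day, mb_per_run=10):
    """Total megabytes the PPGPCR server ingests per day."""
    return endpoints * runs_per_day * mb_per_run

# Hypothetical fleet: 500 endpoints, 2 gpupdate-triggered runs a day.
print(cycle_mb(500, 2), "MB/day")
```

For that hypothetical fleet the server would ingest about 10 GB a day, which is why constrained links to the DCs matter.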
The biggest problem, again, is that PPGPCR Auditor depends on GPRESULT /X, a system command whose behavior is hardcoded; its output accounts for the bulk of the bandwidth.
We know this is a problem for PPGPCR where bandwidth between the clients and the DCs is constrained, and we're working on ways to minimize it in future releases.