I monitor 22 sites with 1G interfaces back to a 10G data center. I have observed through PRTG over the last 3 months that the 10G connection never exceeds 1 Gbps of traffic, and the 2 highest-use sites drop SNMP traffic data when they go over 250 Mbps. I think the line is saturated and the dropped packets are evidence of the saturation.
The service provider has tested their 10G interface and can push that much data through it. I have also worked with my enterprise switch vendor, and the configurations appear to be correct. At this point my only proof is the lost packets in the WLAN Traffic sensor and user reports of a slow network during those times. Monitoring the 10G sensor on the data center switch never shows traffic over 950 Mbps.
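One way to sanity-check the PRTG readings is to poll the interface's 64-bit octet counters (IF-MIB ifHCInOctets/ifHCOutOctets) directly, take two samples, and compute the throughput yourself. The sketch below shows the counter-delta arithmetic only; the sample values and the 60-second interval are made up for illustration, and in practice the two counters would come from snmpget against the data center switch.

```python
def throughput_mbps(counter1, counter2, interval_s, counter_bits=64):
    """Compute average throughput in Mbps from two SNMP octet-counter
    samples taken interval_s seconds apart. The modulo handles a single
    counter wrap (unlikely with 64-bit ifHC* counters, common with
    32-bit ifInOctets on a busy 10G link)."""
    delta_octets = (counter2 - counter1) % (1 << counter_bits)
    return delta_octets * 8 / interval_s / 1e6


# Hypothetical samples 60 s apart: a delta of 1,875,000,000 octets
# corresponds to a sustained 250 Mbps.
sample_a = 10_000_000_000
sample_b = sample_a + 1_875_000_000
print(throughput_mbps(sample_a, sample_b, 60))  # 250.0
```

If the hand-computed rate agrees with PRTG, the sensor data is trustworthy and the question becomes why the 10G link tops out near 1 Gbps; note that 32-bit counters wrap in under 4 seconds at 10 Gbps, so make sure whatever is polling uses the ifHC (64-bit) variants.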
Am I correct in assuming that the lost packets occur because higher-priority traffic is getting through, and that the sites competing for the most bandwidth would show this while the others would not?
Background - I do not use VLANs. I segment the network with private subnets at each site, with firewalls that pass everything except broadcast traffic. I have a private MAN through Cox Cable (the service provider) that connects all sites. All Internet traffic and data center traffic passes through the 10G connection.