Data Center Efficiency

In 2008, NetRiver began the phase II expansion of its Seattle (Lynnwood) data center, adding 1.8 megawatts of critical floor load in a Tier III design to the existing data center footprint. The spatial constraints dictated that the design support a floor loading of 500 watts per square foot, roughly 3 to 4 times the density found in the existing Seattle market. The project goals were to meet these high density requirements in a confined footprint and to maximize the efficiency of the power and HVAC systems, capturing PUD rebates and lowering operating costs while staying within the company's budget.
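
Those two figures pin down the scale of the build. A short back-of-the-envelope sketch in Python (with a hypothetical 150 W per square foot figure standing in for a conventional build) shows the implied critical floor area and the density multiple:

    # Implied critical floor area for the phase II expansion (a rough check).
    critical_load_w = 1_800_000      # 1.8 MW of critical floor load
    design_density_w_per_sqft = 500  # design target: 500 W per square foot

    floor_area_sqft = critical_load_w / design_density_w_per_sqft
    print(f"Implied critical floor area: {floor_area_sqft:,.0f} sq ft")  # 3,600 sq ft

    # Hypothetical comparison point: a conventional build at roughly 150 W/sq ft.
    conventional_density_w_per_sqft = 150
    multiple = design_density_w_per_sqft / conventional_density_w_per_sqft
    print(f"Density multiple vs. a conventional build: {multiple:.1f}x")  # ~3.3x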

As a result of the density requirements, the raised floor method of cooling was abandoned and an overhead ductwork distribution scheme was developed to better deliver cooling to the front of the server racks. The overhead approach had significant advantages over raised floor cooling. Because the air is distributed from above, the ductwork registers can be easily accessed and adjusted to match the airflow needs in front of each server cabinet. It also cools the face of the cabinet more effectively: with air delivered from overhead and cold air being heavier than warm air, the rack ends up with a more even distribution of airflow across its entire face. Effective cooling was further complicated by the high density expectations, and the larger ductwork rounds would not have fit under a raised floor in any case; NetRiver would have to be extremely aware of thermal fluctuations as more servers were packed into a very tight space. A combination of hot aisle/cold aisle and cold aisle/cold aisle segmentation was used to segregate the airflow, with polarplex curtains further isolating the hot and cold air streams.

Eighteen (18) 30-ton air handlers (CRACs) were deployed to meet the floor loading requirements. The CRACs were designed to feed into a common ductwork plenum and operate in series and in tandem. Variable frequency drives (VFDs) were paired with the CRAC units to control fan speed in proportion to the cooling requirements on the data center floor. N+1 diversity was achieved across the airflow distribution scheme through the common plenum and the VFDs: if any air handler or VFD is lost, the adjoining units can spin up and carry the additional load.
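
A rough capacity check, using the standard conversion of roughly 3.517 kW of heat rejection per ton of cooling, shows how the eighteen 30-ton units relate to the 1.8 MW design load; even with one unit out of service, the remaining seventeen land essentially at the design load. This is a simplified sketch that ignores sensible-capacity derating and safety margins, not NetRiver's actual sizing calculation:

    # Rough capacity check for the CRAC deployment (illustrative only).
    KW_PER_TON = 3.517               # 1 ton of refrigeration is about 3.517 kW of heat rejection

    critical_load_kw = 1800          # 1.8 MW design load
    units, tons_per_unit = 18, 30

    installed_kw = units * tons_per_unit * KW_PER_TON          # all 18 units running
    n_minus_one_kw = (units - 1) * tons_per_unit * KW_PER_TON  # one unit lost, 17 remaining

    print(f"Design load:            {critical_load_kw:,} kW")
    print(f"Installed capacity:     {installed_kw:,.0f} kW")    # ~1,899 kW
    print(f"Capacity with one down: {n_minus_one_kw:,.0f} kW")  # ~1,794 kW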

The cooler weather in the Seattle market provided an ideal environment for outside air economization, allowing the CRAC units to run in full economization (when the outside ambient air temperature was below the temperature set point) or in partial economization. The CRAC units could therefore run wholly or partly off the redundant chilled water loops fed from the chillers, or filter and draw in outside air directly and deliver it to the data center floor, driving down the cost of running the chillers. This proved to be a major source of energy savings and was easy to employ given the proposed location of the CRAC units within the NetRiver site and the local climate.
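
The mode selection described above can be sketched as a simple temperature comparison against the supply set point. The 55 F set point and 10 F partial-economization band below are hypothetical, and a real CRAC controller would also weigh humidity and chilled water conditions:

    # Simplified economizer mode selection (illustrative set points).
    def economizer_mode(outside_air_temp_f: float, supply_setpoint_f: float,
                        partial_band_f: float = 10.0) -> str:
        """Pick a cooling mode from outside air temperature.

        Full economization when outside air is at or below the set point,
        partial economization within a band above it, otherwise mechanical
        cooling from the chilled water loops.
        """
        if outside_air_temp_f <= supply_setpoint_f:
            return "full economization (outside air only)"
        if outside_air_temp_f <= supply_setpoint_f + partial_band_f:
            return "partial economization (outside air + chilled water)"
        return "mechanical cooling (chilled water loops)"

    for oat in (48, 62, 78):
        print(oat, "F ->", economizer_mode(oat, supply_setpoint_f=55.0))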

NetRiver further deviated from the traditional data center build in its equipment selection by using newer technology to improve the efficiency of its chiller plant. The N+1 diversity requirements for the site meant that the chillers, taken together, would only ever operate under partial load, and outside air economization would further reduce the chiller plant's total run time. Given these variable conditions, NetRiver opted to add two new 250-ton Smardt Turbocor chillers to the existing chiller plant. These chillers offer a higher integrated part load value (IPLV) than typical scroll/screw compressor technologies, and they reduce maintenance concerns, as there is no oil or lubricant associated with the magnetic bearing compressor sections. Variable frequency drives were added to the pump sections so that the pump package could operate the chillers in tandem, controlling the flow of water to only what was needed and sequencing the chillers to maximize their part-load efficiency.
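
IPLV is a weighted average of efficiency at four part-load points (1% at full load, 42% at 75%, 45% at 50%, and 12% at 25% under AHRI 550/590), which is why a chiller that shines at partial load scores so well when N+1 redundancy guarantees it rarely runs flat out. The COP curves in the sketch below are purely illustrative, not measured Smardt or screw-chiller data:

    # IPLV as a weighted average COP at the AHRI 550/590 load points.
    IPLV_WEIGHTS = {1.00: 0.01, 0.75: 0.42, 0.50: 0.45, 0.25: 0.12}

    def iplv_cop(cop_by_load: dict) -> float:
        """Integrated part load value as a weighted average COP (higher is better)."""
        return sum(IPLV_WEIGHTS[load] * cop for load, cop in cop_by_load.items())

    # Hypothetical efficiency curves for comparison.
    magnetic_bearing = {1.00: 5.8, 0.75: 7.5, 0.50: 9.5, 0.25: 8.5}
    screw_compressor = {1.00: 5.6, 0.75: 6.0, 0.50: 6.2, 0.25: 5.0}

    print(f"Magnetic bearing IPLV (COP): {iplv_cop(magnetic_bearing):.1f}")   # ~8.5
    print(f"Screw compressor IPLV (COP): {iplv_cop(screw_compressor):.1f}")   # ~6.0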

The reliance on new technology became a paramount consideration in the selection of electrical infrastructure for the phase II expansion. The proposed UPS room was too small to accommodate traditional UPS topologies, and with space at a premium, a larger UPS room would have meant either a reduction in usable data center floor space or a very costly building expansion. Furthermore, the N+1 redundancy requirements dictated that the UPSs would operate under partial load, a range in which legacy units are very inefficient. NetRiver therefore needed a compact UPS that could also run efficiently at partial load, and the Eaton 9395 aligned well with both requirements. The smaller, transformerless 9395 allowed NetRiver to keep the proposed UPS room as sized, without any further expansion, and its efficiency meant NetRiver would be operating at 99% efficiency across its loading range. This was significant: legacy UPSs on the market were operating at 94-95% efficiency at full load, so at worst NetRiver was recovering 4-5 percentage points, with savings expected at $10,000 per percentage point per year.
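
The savings claim reduces to a short calculation: the power lost in the UPS is the load divided by efficiency, minus the load, so each recovered percentage point at this scale is worth a predictable amount per year. The 1.8 MW protected load and $0.06/kWh rate below are assumptions for illustration, not NetRiver's actual load profile or tariff:

    # Annual cost of UPS conversion losses at a given efficiency (illustrative).
    HOURS_PER_YEAR = 8760

    def annual_loss_cost(it_load_kw: float, efficiency: float, rate_per_kwh: float) -> float:
        """Cost of the power dissipated in the UPS over a year."""
        loss_kw = it_load_kw / efficiency - it_load_kw
        return loss_kw * HOURS_PER_YEAR * rate_per_kwh

    it_load_kw = 1800   # assumed protected load (the full 1.8 MW, for simplicity)
    rate = 0.06         # assumed utility rate, $/kWh

    legacy = annual_loss_cost(it_load_kw, 0.95, rate)
    modern = annual_loss_cost(it_load_kw, 0.99, rate)
    print(f"Losses at 95% efficiency: ${legacy:,.0f}/yr")
    print(f"Losses at 99% efficiency: ${modern:,.0f}/yr")
    print(f"Savings:                  ${legacy - modern:,.0f}/yr")  # ~$10k per point recovered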

Having developed a strong working relationship with the Snohomish County PUD (SnoPUD) during the HVAC equipment selection, NetRiver looked to capture rebates on the power distribution systems as well. This proved to be an interesting undertaking, as the common practice was to offer rebates only for savings related to HVAC systems. The new UPSs, however, provided a compelling case for SnoPUD both to offer rebates and to establish the criteria for doing so, and SnoPUD ended up offering incentives tied to the energy efficiency of the data center as a whole rather than just the HVAC systems. Monitoring meters were placed throughout the facility to validate the metrics advertised by the underlying technologies, with some striking results.

The PUD calculated the total energy savings at an estimated 1.58 million kWh per year, and the HVAC systems alone were calculated to be 40-70% more efficient than other high density server room deployments. This was great news for NetRiver, and the sweetener came in the form of a PUD rebate check for $280,000, the first time a PUD had offered cash-back incentives for both HVAC and electrical systems. In total, NetRiver spent $460,000 more to deploy the newer technologies than it would have on the benchmark equipment. The one-time net cost increase of $180,000 (after incentives) yielded $103,000 in yearly energy savings, a payback of 1.75 years and an ROI of 57%.
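
The payback and ROI quoted above follow directly from the figures in this paragraph; a minimal check:

    # Payback and ROI from the figures in the case study.
    extra_capital_cost = 460_000     # premium paid for the newer technologies
    pud_rebate = 280_000             # SnoPUD cash-back incentive
    annual_energy_savings = 103_000  # yearly savings on energy consumption

    net_cost = extra_capital_cost - pud_rebate          # $180,000
    payback_years = net_cost / annual_energy_savings    # ~1.75 years
    simple_roi = annual_energy_savings / net_cost       # ~57% per year

    print(f"Net cost after incentives: ${net_cost:,}")
    print(f"Simple payback:            {payback_years:.2f} years")
    print(f"Annual ROI:                {simple_roi:.0%}")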

What did this all mean?

NetRiver began tracking its PUE (power usage effectiveness) and, over the six-month window following project completion, found that it had a PUE of 1.24: it takes 1.24 megawatts of total facility power to deliver 1 megawatt of usable, computer-grade power to the IT equipment. This was a fantastic number for NetRiver and the ultimate verification of the data center's real efficiency. It was operating significantly below the rest of the local market, where PUEs of 1.7 to 2.0 are common. With energy costs expected to consume half of a data center's operating expenditures (OPEX), this also presented a significant competitive advantage over the rest of the market. Not only had the energy efficiency strategy captured cash-back incentives and paid for itself in a short amount of time, it was allowing NetRiver to do business with less overhead cost. With energy costs expected to rise, this operating advantage will only grow, leaving NetRiver even better positioned relative to the rest of the data center market.
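
PUE is simply total facility power divided by the power delivered to IT equipment, so the gap between 1.24 and a more typical 1.8 translates directly into overhead power and cost for the same IT load. The $0.06/kWh rate in the sketch below is an illustrative assumption:

    # What a PUE figure implies for a given IT load (illustrative rate).
    def facility_power_kw(it_load_kw: float, pue: float) -> float:
        """Total facility draw implied by a PUE; PUE = total power / IT power."""
        return it_load_kw * pue

    it_load_kw = 1000   # 1 MW of IT load
    rate = 0.06         # assumed utility rate, $/kWh

    for pue in (1.24, 1.8):
        total_kw = facility_power_kw(it_load_kw, pue)
        overhead_kw = total_kw - it_load_kw
        annual_cost = total_kw * 8760 * rate
        print(f"PUE {pue}: total {total_kw:.0f} kW, overhead {overhead_kw:.0f} kW, "
              f"~${annual_cost:,.0f}/yr in energy")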
