This is a short and simple request. One of my Twitter acquaintances is taking the first steps in what can be a very tricky endeavor - moving his server-room/data-center to a completely new location. He'll be able to design and plan from the ground up, since the new location has been completely gutted.
I'm sure that many of us have gone through this ever-so-special experience, so my request is simple: Please share your tips, tricks, perils and pitfalls from, shall we say, the Ghosts of ServerMoves Past in the comments. What saved you significant time or effort? What did you miss that came back to bite you? Was animal sacrifice required? Let us know...
6 comments:
We're currently still working out the logistics, since we're not just an 'office' - we're an entire circuit board manufacturing plant. So this would prove to be quite the project if we do choose the move option.
I've done this twice, with a fairly small server-room operation (30 or so servers, plus core switches/routers)...a few points:
1) PLAN YOUR AIR CONDITIONING! We had one move where some bean-counter "just decided" that we didn't really need the particular model of A/C unit we specified. Naturally, we wound up going back and installing a second A/C unit to handle the load. (A rough sizing sketch follows at the end of this comment.)
2) Do you support a test/development environment? This is the perfect opportunity to isolate your test/dev network from the production network. You can simply run a completely separate set of punchdowns/racks/switches with a single 'gateway' point into production...
3) Plan for growth, and get the infrastructure now. Some folks go by category (how many more servers? how many more network drops?), while others have just slapped the same "room to grow" factor on everything ("figure 30% growth in all areas").
4) What can you "preposition" before the move? In one case, we were able to put temporary servers in place for key services (e.g. DHCP, DNS, et al.), so that end users could "move their own stuff" while we moved the server room. That was a big timesaver...
That's it for now...
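To put rough numbers on the A/C and growth points above, here is a back-of-the-envelope sketch in Python. Every figure in it is a placeholder for illustration, not from any actual move; plug in your own measured loads.

    # Back-of-the-envelope cooling estimate.
    # Assumptions: nearly every watt the gear draws ends up as heat,
    # 1 W of heat is about 3.412 BTU/hr, and 1 "ton" of cooling is 12,000 BTU/hr.

    it_load_watts = 30 * 400      # e.g. 30 servers at ~400 W average draw (placeholder)
    other_watts = 2000            # switches, routers, UPS losses, lighting (placeholder)
    growth_factor = 1.30          # the "30% room to grow" rule of thumb

    design_watts = (it_load_watts + other_watts) * growth_factor
    btu_per_hour = design_watts * 3.412
    cooling_tons = btu_per_hour / 12000.0

    print(f"Design heat load: {design_watts:.0f} W "
          f"= {btu_per_hour:.0f} BTU/hr "
          f"= {cooling_tons:.1f} tons of cooling")

Hand the BTU/hr figure to whoever specs the A/C so the capacity discussion starts from your numbers rather than theirs.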
Wes has some good points. Other things to consider are:
1. Get the facilities set up for your needs. Things like raised floors to run wiring and cooling through can help make the space more usable and future-proof.
2. Prepare for disasters. Can you get fire suppression systems for the server rooms that do not use water? Design them in now. Also consider alternative power sources: can you get two different mains connections or a generator installed?
3. Temporary bandwidth between locations. Can a point-to-point or VPN connection be set up between your old and new facilities? This will help during the move. You could even transfer VMs or data over it if the bandwidth is there (a quick transfer-time estimate follows below).
Those are some of my ideas so far.
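Whether a temporary link is worth provisioning comes down to simple arithmetic: how much data has to move versus what the link can realistically carry. A quick estimate in Python; the data size, link speed, and efficiency figure are illustrative assumptions only.

    # Will the temporary inter-site link carry the data in a reasonable window?
    # Assumes ~70% effective throughput on the raw link rate (protocol overhead,
    # VPN encapsulation, other traffic) - adjust to taste.

    data_to_move_tb = 4.0      # e.g. VM images plus file shares (placeholder)
    link_mbps = 200            # raw point-to-point / VPN bandwidth (placeholder)
    efficiency = 0.70

    effective_mbps = link_mbps * efficiency
    data_megabits = data_to_move_tb * 1_000_000 * 8   # TB -> megabits (decimal units)
    hours = data_megabits / effective_mbps / 3600

    print(f"~{hours:.1f} hours to move {data_to_move_tb} TB "
          f"over a {link_mbps} Mb/s link at {efficiency:.0%} efficiency")

If the answer comes out in days rather than hours, plan on physically moving the disks and using the link only for the final delta sync.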
Wes and Jared both have good points.
1) A/C. Don't just plan for capacity, plan for redundancy. We had an issue where two units were put in for redundancy: a single unit could handle the load on an average day, but in the summer it couldn't handle the extra heat bleeding in through the walls (110°F+ in TX makes for warm server rooms). If you lose one unit and the remaining unit cannot carry the load, things get hot fast.
2) POWER... Make sure you plan for power, plus extra for growth and spikes. We didn't, and when it came time to add more power, we had to upgrade the UPS and schedule an outage for 6 blocks of cubes, because the engineer had put the server room on the same transformer as a handful of cubes. Planning ahead would have put the cubes and servers on different transformers from the beginning.
3) POWER. When specifying that you want 2 power connections to each rack, explicitly define those connections as separate CIRCUITS. My boss didn't, and we discovered later that the "redundant" power to the rack was redundant only at the rack PDU level: the engineer had run a single circuit with 2 sockets to each rack. This meant that when it came time to do the power upgrade, we had to power down the whole rack. (A rough circuit-capacity sketch follows at the end of this comment.)
4) WALLS. This seems silly, but it is a gotcha we didn't account for. If you are building from scratch, see if they can run the walls from floor to ceiling. Ours were only run to the height required for a drop ceiling, which allowed heat to sneak in over the top.
5) Plan for maintenance. Depending on how all your other stuff is installed, plan for easy access. The folks that did the A/C units for us put one of them above the racks, the other was above the wall between the server room and kitchen.
Don't ask about the a/c above the racks, or why the server room was next to the kitchen. This is one of the outcomes of being shoehorned with the left-overs after they let the building planners work on the rest of it first.
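The circuit gotcha in point 3 is easy to catch on paper: compare the rack's worst-case draw against what one branch circuit can continuously deliver (the usual 80% continuous-load derating). A rough Python sketch follows; the voltage, breaker size, and rack load are placeholder assumptions, not our actual figures.

    # Can one circuit carry a whole rack, and do two feeds give real redundancy?
    # Assumes 208 V branch circuits and the usual 80% continuous-load derating.

    circuit_volts = 208
    circuit_amps = 30
    derating = 0.80
    usable_watts = circuit_volts * circuit_amps * derating   # ~5 kW per circuit

    rack_load_watts = 6500   # measured or estimated worst-case rack draw (placeholder)

    circuits_needed = -(-rack_load_watts // int(usable_watts))   # ceiling division
    print(f"One circuit supplies ~{usable_watts:.0f} W continuously")
    print(f"A {rack_load_watts} W rack needs {circuits_needed} circuit(s) for capacity alone")

    # For true A/B redundancy, EACH feed must be able to carry the whole rack by itself:
    if rack_load_watts <= usable_watts:
        print("A single feed can carry the rack - failover will work")
    else:
        print("A single feed cannot carry the rack - the 'redundancy' is only nominal")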
Something else I didn't even know about until I met with the contractor today, at least here in Oregon if not elsewhere, is an electrical code that requires 'decommissioning'. That is, a tenant is required to rip out any low-voltage wiring, A/C ducting, etc. that was installed specifically for them while they occupied the building. So you have to plan not just the logistics of wiring the new facility, but also the logistics of removing all the wiring from the old facility and what to do with it all, i.e., recycling.