Space between devices in racks

  • Space between devices in racks

How much space do you leave between devices in racks? Is there a best practice for this? I can see in the data centre that some devices have a 1U gap and others are all touching.
    Steve Jordan
    Business Development Manager
    Hyve Secure Cloud Hosting

  • #2
I guess it is going to depend on the cooling and air-flow in your data suite? Won't it also depend on the model of servers you operate, i.e. how the air is pulled from the front to the back of the racks?
Ricky Blaikie - Senior Hosting Consultant - Host Consult Ltd
    TEL: +44 (0) 20 3002 7992 WEB: http://www.host-consult.net
    * Colocation * Dedicated and Cloud Hosting * Connectivity * VDC * VOIP *



    • #3
Depends what's in them; I've seen the odd rack (cabinet) holding one or two mahoosive blade centres and nothing else!
      Dominic Taylor - Director, Paragon Internet Group Ltd
      - UK hosting from £17/year +VAT
      Tel: 0800 862 0248
      Company No. 7573953



      • #4
        What I meant to ask was, is there a set of recognised standards/guidelines/formulas?

        I guess I should ask the DC team!
        Last edited by stevejordan; 8th November 2012, 05:29 PM.
        Steve Jordan
        Business Development Manager
        Hyve Secure Cloud Hosting



        • #5
My guys just informed me that it's all down to the power allocation per cabinet. Our GS2 site currently has 8 kW per cab, so we install with no gaps: high density. If you have a lower power allocation, then you will most likely need a 1U gap between hardware.
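(A rough back-of-envelope sketch of that power-vs-space trade-off: given a per-cab power budget and an assumed average per-server draw, does power or rack space run out first? The 8 kW figure is from the post above; the 42U rack and ~180 W per 1U server are illustrative assumptions, not anything from the thread.)

Code:
# Back-of-envelope: does power or rack space run out first?
# 8 kW per cab is from the post; the 42U rack and ~180 W average
# draw per 1U server are illustrative assumptions.

def cab_density(power_budget_w=8000, server_draw_w=180, usable_u=42):
    max_by_power = power_budget_w // server_draw_w
    servers = min(max_by_power, usable_u)
    spare_u = usable_u - servers
    return servers, spare_u

for budget_w in (8000, 4000):
    servers, spare = cab_density(power_budget_w=budget_w)
    layout = "no gaps" if spare == 0 else f"{spare}U spare to spread as gaps"
    print(f"{budget_w / 1000:.0f} kW cab: {servers} x 1U servers, {layout}")

With the lower allocation the same arithmetic leaves a chunk of the rack empty, which is where the 1U gaps come from.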
          Steve Jordan
          Business Development Manager
          Hyve Secure Cloud Hosting



          • #6
But why?

edit: I mean, why are you enquiring? Is this cooling-related?
            Last edited by Ricky; 8th November 2012, 06:01 PM.
Ricky Blaikie - Senior Hosting Consultant - Host Consult Ltd
            TEL: +44 (0) 20 3002 7992 WEB: http://www.host-consult.net
            * Colocation * Dedicated and Cloud Hosting * Connectivity * VDC * VOIP *



            • #7
              Hi Ricky, I'm just trying to find out the actual recommended standard for DC installations. Research really.

We always have plastic strips across any U gaps, so the cool air always passes through the hardware to get to the back, which is the hot side. The DC guys monitor the cooling and we have temperature checks all over the suite, but I was actually thinking more along the lines of vibration on the disk shelves. If the devices are touching, will it actually reduce the lifespan of the drives? Especially sensitive Fibre Channel drives?

              Cheers
              Steve Jordan
              Business Development Manager
              Hyve Secure Cloud Hosting



              • #8
Originally posted by stevejordan
If the devices are touching, will it actually reduce the lifespan of the drives? Especially sensitive Fibre Channel drives?
                Hi Steve,

It shouldn't do; the servers are designed to be rack-dense, after all!

                Ricky
Ricky Blaikie - Senior Hosting Consultant - Host Consult Ltd
                TEL: +44 (0) 20 3002 7992 WEB: http://www.host-consult.net
                * Colocation * Dedicated and Cloud Hosting * Connectivity * VDC * VOIP *



                • #9
The only large gaps we have between our kit are for the switches, which are mounted mid-rack. Otherwise everything is vertically adjacent, even with our blade systems.

But I don't think any of our kit actually touches another server; there is always a ~2mm+ gap between the systems anyway, just because of the server height and rail positions.
                  Benjamin Lessani

                  sonassi| Magento Hosting | High Performance. Expert Support
                  Sonassi Limited registered in Manchester No. 07715859. Registered office: 1st Floor, 14 Exchange Quay, Salford Quays, M5 3EQ. VAT number GB 101 263 474. Phone: 0161 870 2414.




                  • #10
I don't think there is any sort of standard for this, although I will happily stand corrected if there is.
                    ••• Mark Castle •••



                    • #11
From what I have seen it really seems to be down to whoever is doing it... Some DCs I've been in look like they just threw in a truckload of racks and servers and wherever they landed, they stayed; others are much more precise, with everything in line and careful airflow planning.
                      Dan Rodgers - Director & Domains Expert - allthe.domains
                      Domains | UK SSD Hosting | Fully Managed WordPress
                      :: Nominet & CentralNic Accredited :: 800+ Domain Extensions Supported :: 24x7x365 UK-based Support ::
                      Views are my own and not those of my employer or any other company I own/represent.



                      • #12
Irrespective of the room cooling capability, or the delivered power to a rack, there are some devices and systems that have a tendency to cook, or to suffer vibration problems, when stacked in every U. Dell 1425s used to die frequently unless you left 1U between each one; the R2/860 series were much better, with the top one of 8 getting hot, but at 6 stacked they were fine.

HP DL360s stacked in every U seem to go through a lot of drives; spacing them out helps.

And of course there's the aesthetics, plus the number of power outlets and patch panels, to take into account...

IME clients that put more than ~20 systems to a rack tend to need to visit more frequently to fix/replace things - but that could simply be because they have twice the kit in there.

Also, if not all the cases are the same depth, venting designs vary, and access to cabling can be an issue.

Usually there are enough "devices" along with the kit that a 1U gap every 2 or 4 machines is possible (a quick sketch of what that costs in capacity is below).
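(To put a number on that, here is a rough sketch of how many 1U servers actually fit once you add the gaps. The 42 usable U is an assumption; the 2- and 4-machine groupings are the spacings mentioned above.)

Code:
# How many 1U servers fit in a rack with a 1U gap after every group?
# 42 usable U is an assumption; the 2- and 4-machine groupings are
# the spacings mentioned above.

def rack_fill(usable_u=42, servers_per_group=4, gap_u=1):
    group = servers_per_group + gap_u              # servers are 1U each
    full_groups = usable_u // group
    leftover_u = usable_u - full_groups * group
    return full_groups * servers_per_group + min(leftover_u, servers_per_group)

print(f"No gaps:        {rack_fill(servers_per_group=42, gap_u=0)} servers")
print(f"1U gap every 4: {rack_fill(servers_per_group=4)} servers")
print(f"1U gap every 2: {rack_fill(servers_per_group=2)} servers")

So the gap-every-2 layout gives up roughly a third of the rack compared with filling every U.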
                        Rob Golding Astutium Ltd AS#29527 Company#08183381 Phone#020 3475 2555
                        Domain Name Registration - uk domains just £5.55/2 years | DNS Services | Web Hosting from £2.95 | Minecraft Servers from £2.50
                        London Docklands Colocation 1u £49.95 | Virtual Private Servers from £4 | Virtual Dedicated Servers from £12 | Managed + Unmanaged Dedicated Servers from £69
                        Make more money from domains - Talk to me about our Domain Reseller Accounts and WHMCS modules



                        • #13
1U servers are designed to be high density, with pure front-to-back airflow. Variable fan controllers manage the chassis temperature and ensure all of the hot air is exhausted through the rear of the rack, so very little is conducted between servers. We run very high density racks and have never had any issues.

As Rob correctly says, you do need to be careful mixing server depths within a rack, as that can lead to 'hot spots'. It's better, if you can, to stick to one type of chassis per rack. It's also critical that you're not drawing warm air in from the front of the rack - this is particularly difficult in badly designed co-lo facilities. This is how a typical rack at Tsohost looks.

[photo of a typical Tsohost rack]
                          Please ignore the orange light. I believe that was a fan failure (system was running on 7 of its 8 fans).



                          • #14
I'd say no gaps is best practice.

Physics 101: air will take the path of least resistance.

That is assuming you want ALL your cold air to go through your servers rather than around them. Servers do suck air in, and some people believe a server will out-pull a gap, but when you start looking at AC units moving 24,000 cubic metres a minute you soon realise gaps are bad.

DC AC is normally pulling air 3-4 times harder than the servers push it.

When you have switches and variable-depth gear, having no gaps can give you hot spots, so gaps there are a good idea.

Another important factor which seems to get lost these days is actually removing the hot air from the empty void at the back of the rack. Many installations have 1000mm deep racks with 500mm deep gear in them and a 500mm void behind. I think the heat should be forcibly removed from the rack by the server fans into the corridor, so it can be sucked up by the AC.
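(For anyone wanting to sanity-check the airflow argument, a rough calculation of how much air one loaded cab needs to move is below, using Q = P / (rho x c_p x dT). The 8 kW load and the 24,000 m³/min AC figure come from the thread; the 10 °C front-to-back temperature rise and the air properties are assumptions.)

Code:
# Airflow needed to carry heat out of a cab: Q = P / (rho * c_p * dT)
# The 8 kW load and 24,000 m^3/min AC figure are from the thread;
# the 10 K temperature rise and air properties are assumptions.

P_W = 8000      # heat load of one cab, W
RHO = 1.2       # air density, kg/m^3
CP = 1005       # specific heat of air, J/(kg*K)
DELTA_T = 10    # assumed front-to-back temperature rise, K

q_m3_per_min = P_W / (RHO * CP * DELTA_T) * 60
print(f"One 8 kW cab needs roughly {q_m3_per_min:.0f} m^3/min of airflow")
# ~40 m^3/min per cab, so room-level AC moving 24,000 m^3/min dominates
# the server fans - any gap in the rack just becomes a bypass path for cold air.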
                            Gary Coates - ServerHouse Ltd
                            Established Colocation provider, Running Two Tier II & Two III data centres from two diverse sites in Hampshire. Bespoke complex managed hosting, 24x7 IT and resilient business connectivity from 100Mbs
                            Tel: 01329 800911 - www.serverhouse.co.uk



                            • #15
                              That's settled then!
Ricky Blaikie - Senior Hosting Consultant - Host Consult Ltd
                              TEL: +44 (0) 20 3002 7992 WEB: http://www.host-consult.net
                              * Colocation * Dedicated and Cloud Hosting * Connectivity * VDC * VOIP *
