Private Colo - 1/3 racks, and what does 1/4 rack actually mean?


  • AbigailF
    replied
I think the 1/3 rack option carries real weight in people's decision-making.
    People who want something better than a 1/4 rack (which is on the small side by most standards) but don't want to invest in a full rack or 1/2 rack can go for this.
    So a 1/3 rack is viable as well.

    Regards.


  • othelloRob
    replied
    Originally posted by serverhouse View Post
For a big fuse to blow would be bad management
Until the aircon cleared it, bits of the 5th floor of East smelt like Bonfire Night.


  • othelloRob
    replied
    Originally posted by Ed-Freethought View Post
    So it's not really suitable for an environment where the facility operator isn't in complete control of the racks...
It's not suitable for a 'sub-tenant' setup, but then it was designed for in-house computer rooms, not colo-type facilities ...


  • Ed-Freethought
    replied
    Originally posted by othelloRob View Post
Properly managed Electrak works fine if you manage the L and R sides correctly: breakers on the bars in the racks, properly rated fuses on all kit, don't cross-supply racks from different YRB phases, never let a single rack draw >30 amps, etc.
    So it's not really suitable for an environment where the facility operator isn't in complete control of the racks...


  • serverhouse
    replied
    Originally posted by othelloRob View Post
I've seen entire suites at TH go off when the PDU blew a 'big fuse' - there are always points of failure.

    Properly managed Electrak works fine if you manage the L and R sides correctly: breakers on the bars in the racks, properly rated fuses on all kit, don't cross-supply racks from different YRB phases, never let a single rack draw >30 amps, etc.

    For a big fuse to blow would be bad management too, which historically Telehouse suffered a lot of, mostly because they'd sell 500W racks, EVERYONE was putting 2kW in them, and TH didn't bill, limit or monitor that for a number of years.

    A lot of it depends on past experience: if you have used a system and managed it correctly then it's good. Even the best system in the world can be mismanaged and subject to human error, hence why Tier IV DCs still can't guarantee 100% true uptime for eternity.

    Kinda like having a crash in a BMW and then refusing to buy another BMW because they crash - it wasn't the car's fault.


  • serverhouse
    replied
    Originally posted by Karl
One of the things customers tell us they don't like about Amazon is that they never know what their bill will be - especially as you need to factor in some unknowns, like disk I/O ops. I was talking to a customer about this today.
    We hear that too. We often give people the option to change after 30 days; as others have said, fixed costs are easier to work to, but smaller clients will look for value.


  • othelloRob
    replied
    Originally posted by Ed-Freethought View Post
Yeah, I've seen whole banks of desks in a callcentre go off due to a fault somewhere in an Electrak system. It's a nice concept, but not sure I'd ever want to put one in for exactly that reason.
    I've seen entire suites at TH go off when the PDU blew a 'big fuse' - there are always points of failure.

    Properly managed Electrak works fine if you manage the L and R sides correctly: breakers on the bars in the racks, properly rated fuses on all kit, don't cross-supply racks from different YRB phases, never let a single rack draw >30 amps, etc.
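
    To make those rules concrete, here's a minimal sketch of the sort of checks they imply - the rack data, device names and figures below are invented for illustration, not anyone's real tooling:

```python
# Hypothetical rack inventory: rack -> list of (device, YRB phase, peak amps).
RACKS = {
    "rack-01": [("fw-1", "Y", 2.5), ("sw-1", "Y", 1.0), ("srv-1", "Y", 8.0)],
    "rack-02": [("srv-2", "R", 12.0), ("srv-3", "B", 6.0)],  # crosses phases
}

MAX_RACK_AMPS = 30.0  # the ">30 amps" ceiling mentioned above

def check_rack(devices):
    """Flag cross-phase supply and racks whose summed peak draw exceeds 30A."""
    phases = {phase for _, phase, _ in devices}
    total = sum(amps for _, _, amps in devices)
    problems = []
    if len(phases) > 1:
        problems.append(f"cross-supplied from phases {sorted(phases)}")
    if total > MAX_RACK_AMPS:
        problems.append(f"peak draw {total:.1f}A exceeds {MAX_RACK_AMPS:.0f}A")
    return problems

for rack, devices in RACKS.items():
    for problem in check_rack(devices):
        print(f"{rack}: {problem}")
```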


  • othelloRob
    replied
    Originally posted by mfollett View Post
I am not aware though of any co-location service where you can co-locate say a 1U server in a secured section of a rack and pay for your actual power consumption. Data centres predict and pay for their power in advance, but it should be straightforward to offer far more flexible packages. We have seen such changes in the rest of hosting, so it would be good to see the same for co-location.
    IME, having offered this exact model over 5 years ago, corporate clients hated it - they wanted to know in advance what the bill was going to be each month.


  • Ed-Freethought
    replied
    Originally posted by SteveWright View Post
Personally never been a big fan of any shared busbar system, whether that be in a DC or an office environment.

    I recall a funny happening at Thames Valley Park where the shower in the Websense office leaked out of the shower room, along the corridor, and onto one of the office busbars. That made for some interesting panic and noises... oh, and half the electrics in the office having to be replaced.
    Yeah, I've seen whole banks of desks in a callcentre go off due to a fault somewhere in an Electrak system. It's a nice concept, but not sure I'd ever want to put one in for exactly that reason.


  • SteveWright
    replied
    Originally posted by Ed-Freethought View Post
    They've got an Electrak system, so multiple racks off one shared feed. They've had all sorts of problems over the years with failures taking out multiple racks.
Personally never been a big fan of any shared busbar system, whether that be in a DC or an office environment.

    I recall a funny happening at Thames Valley Park where the shower in the Websense office leaked out of the shower room, along the corridor, and onto one of the office busbars. That made for some interesting panic and noises... oh, and half the electrics in the office having to be replaced.


  • Ed-Freethought
    replied
    Originally posted by serverhouse View Post
I didn't notice anything on their webpage about breakers.... but they're a bit too long a commute for me, and I'm also not a big fan of water for fire (and IT hardware usability) suppression ;-)
    They've got an Electrak system, so multiple racks off one shared feed. They've had all sorts of problems over the years with failures taking out multiple racks.


  • serverhouse
    replied
    Originally posted by Ed-Freethought View Post
    Just look at iPHouse
I didn't notice anything on their webpage about breakers.... but they're a bit too long a commute for me, and I'm also not a big fan of water for fire (and IT hardware usability) suppression ;-)


  • Ed-Freethought
    replied
    Originally posted by serverhouse View Post
    We don't share breakers either.... very bad idea
    Just look at iPHouse


  • serverhouse
    replied
We do a product called 'Pay for Power', whereby we charge a fixed minimum of 7kWh per U and you can have as many U as you care to pay for; anything you use above that 7kWh per U is charged per kWh.

    The costs work out the same when you scale up to normal usage charges.

    We only have a few customers on it, but the concept was to get away from the "I have 11U of hardware so I'll take an 11U rack, and in 3 months have to move everything to a 15U rack" cycle....

    Customers on it think it's great, and nearly always want about 75% spare space for future growth ;-)

    We also charge whole racks by kWh in a more traditional model.

    We don't share breakers either.... very bad idea
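
    As a rough illustration of how that structure might be computed - only the 7kWh-per-U shape comes from the description above; the rates are placeholders I've invented, and the billing period isn't specified:

```python
# Hypothetical rates for illustration only - not serverhouse's actual pricing.
RATE_PER_U = 5.00          # fixed charge per U, including the 7kWh minimum
INCLUDED_KWH_PER_U = 7.0   # the fixed minimum described above
OVERAGE_PER_KWH = 0.20     # per-kWh charge beyond the included amount

def pay_for_power_bill(units, kwh_used):
    """Fixed charge per U, plus any energy beyond the included 7kWh per U."""
    included = units * INCLUDED_KWH_PER_U
    overage = max(0.0, kwh_used - included)
    return units * RATE_PER_U + overage * OVERAGE_PER_KWH

# e.g. a customer taking 15U (with spare space for growth) and using 120kWh:
print(pay_for_power_bill(15, 120.0))  # 15*5.00 + (120-105)*0.20 = 78.00
```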


  • Ed-Freethought
    replied
    Originally posted by connected View Post
    Honestly it doesn't and I'm not entirely sure how it benefits anyone.
Flexibility.

    Originally posted by connected View Post
A DC has to size based on amps or kW.
Which you can still do quite easily with kWh billing.

    Originally posted by connected View Post
Say on average cooling and UPS systems are sized so an average rack is 4-5kW, all with 32A feeds. So you take a colo rack, give someone 4U and a pair of C14 sockets (10 amp max draw) and charge by the kWh. Let's say the server draws 6A - what if they decide they'll turn it off at night? They will only pay for 50%/3A of usage, but will have required you to put aside 6A of capacity. Maybe they'll have 10A of DR servers, only turn them on when they need to in a DR situation, and have them cold 90% of the time - you need to size for 10 amps of servers each month but you'll only charge for, say, 1 amp, etc.
    OK, completely ignoring for a second that the DCaaS billing model is about full racks and not per-U co-location, there's nothing stopping you doing appropriate capacity management independently of billing.

    Originally posted by connected View Post
    It just doesn't help anyone.
Yes, it does.

    Originally posted by connected View Post
    It also doesn't help the colo buyer as they'll be on a shared breaker
    All shared co-location providers (which has nothing to do with DCaaS billing for full racks) use shared breakers.

    LDeX DCaaS racks are normal racks on normal, dedicated circuit breakers. It's only billing which is handled differently.

    Originally posted by connected View Post
with no one really managing current usage, they could easily get trips even though the provider is only selling the rack to 50% capacity in strict kWh terms.
    Why would you not be capacity managing the peak current used by any equipment that you installed in a shared rack?

    Originally posted by connected View Post
Whereas when you buy 1A and 2U, the provider takes, say, 1.5 amps away from their rack capacity, and they can then manage it so they don't have capacity issues.
    Which you could still do even if you are billing power on a kWh basis: just boot the server, measure the peak power usage (as many shared co-location providers do) and use that to manage the equipment installed in the rack against the size of the feeds.

When you're billed on a flexible kWh model, you will always pay more for a given amount of power than you would on a flat monthly rate, as that flexibility has to be covered somewhere.

It's the same with AWS EC2 instances - you can pay for them per hour, but they work out more expensive than an equivalent traditional VPS or dedicated server if you use them 100% of the time, as you're paying for the overhead of spare capacity which Amazon have to maintain.
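
    The break-even point is easy to put a number on - the prices below are invented for illustration, not AWS's or anyone's real rates:

```python
HOURLY_RATE = 0.10     # hypothetical per-hour price for the flexible model
FLAT_MONTHLY = 50.00   # hypothetical flat monthly price for equivalent kit
HOURS_IN_MONTH = 730

# Utilisation below which the flexible per-hour model works out cheaper:
break_even = FLAT_MONTHLY / (HOURLY_RATE * HOURS_IN_MONTH)
print(f"per-hour is cheaper below {break_even:.0%} utilisation")  # ~68%

# At 100% utilisation the flexible model costs more, as argued above:
print(HOURLY_RATE * HOURS_IN_MONTH)  # 73.00/month vs 50.00 flat
```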
