From Polygame
Revision as of 07:29, 17 May 2017 by Syke (talk | contribs) (HPE Blade tools)


Polygame servers

Address policy

10.254.N.10-240 - Servers

  • N=0, S-Mikro-Management
  • N=1, S-Mikro-iLO
  • N=2, S-Mikro-SAN
  • N=3, S-Mikro-VPN
  • N=10, JT11-Crack-Management
  • N=11, JT11-Crack-iLO
  • N=12, JT11-Crack-SAN
  • N=13, JT11-Crack-VPN

10.255.N.10-240 - Network equipment

See Addresses
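The 10.254.N.10-240 scheme above can be expanded mechanically; a minimal sketch (the `block` helper is illustrative, not an existing tool):

```shell
#!/bin/sh
# Print the usable server range for a given third octet N,
# following the 10.254.N.10-240 scheme above.
block() {
    printf '10.254.%d.10 - 10.254.%d.240\n' "$1" "$1"
}

block 1     # S-Mikro-iLO
block 12    # JT11-Crack-SAN
```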



The Polygame co-location site is at Jämeräntaival 11, constructed from 2011 to 20XX. The name comes from the first impression of the room before construction.

Servers at Crackluola


As a research project, Polygame servers have been allowed into the Comnet co-location space. The purpose is to support research on co-location infrastructure (power consumption and environmental measurement) and Internet gaming. All traffic to and from the servers is logged for later analysis; the traffic data is anonymized. For more information on Comnet research projects, contact Comnet staff (Riba, Skebaristi, Puhuri, Prof. Manner).

Servers at Comnet

Blade servers

HP C7000 Blade upgrade

Firmware upgrade procedure

OA factory reset (hold the reset button for 5 seconds, then give input on the serial console), followed by the OA firmware upgrade.

Upgrading the HP FC-Enet switch modules requires a specific version of the VC ALL firmware package, since v3.60 dropped support for the 1/10G switch. The VCSU software requires IP connectivity to the OA. Run VCSU in interactive mode, issue the Update command, and follow the prompts.
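For reference, a non-interactive VCSU invocation looks roughly like the following; the flags are from memory of the VCSU CLI and may differ between versions, and the OA address, credentials, and package filename are placeholders:

```
vcsu -a update -i <OA-IP> -u Administrator -p <password> -l vcfw-all.bin
```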

Cisco CBS3020 update: connect a Cisco console cable, erase the bootROM flash, upload the firmware .BIN file over XMODEM, and set BOOT=flash:<firmwarefilename>.
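From the switch bootloader (`switch:`) prompt, the recovery looks roughly like this; the image filename is a placeholder and the exact command set may vary by bootloader version:

```
switch: flash_init
switch: copy xmodem: flash:<firmwarefilename>.bin
switch: set BOOT flash:<firmwarefilename>.bin
switch: boot
```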

OA factory reset, then enable EBIPA and set the first switch and server address; the others are assigned sequentially from it.

VC-ENET setup (server offline):

- Create a vNet (domain).
- In the VC Manager screen, define an Ethernet Network and add uplink ports to this network.
- In the VC Manager screen, define a Server Profile and map Network Port 1 (eth0) to the vNet and Bay 1.
- In the VC Manager screen, assign the Server Profile to the server bay.

HPE Blade tools


Install HPE repo keys:

Add to /etc/apt/sources.list

 deb jessie/current non-free


apt-get install hpacucli
apt-get install hp-health

Generate a RAID controller report

hpacucli ctrl all diag file=/tmp/

Email the report back to HP. You can of course view it first if you want:

vim ADUReport.txt

Using hpacucli

NB. Did you know you can do all sorts of funky stuff with hpacucli?

Either run commands whose output can be fed to monitoring scripts:

/usr/sbin/hpacucli ctrl slot=0 physicaldrive all show status
/usr/sbin/hpacucli ctrl slot=0 logicaldrive all show status
/usr/sbin/hpacucli ctrl slot=0 array all show status
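As a sketch of how that output can be fed to monitoring, the check below counts drives whose status line is not OK; the embedded sample stands in for real hpacucli output, which you would capture from the commands above:

```shell
#!/bin/sh
# Count drives whose status line does not end in ": OK".
# In production, replace the sample with e.g.:
#   sample=$(/usr/sbin/hpacucli ctrl slot=0 physicaldrive all show status)
sample='physicaldrive 1I:1:1 (port 1I:box 1:bay 1, 450 GB): OK
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, 450 GB): Failed'

bad=$(printf '%s\n' "$sample" | grep -cv ': OK$')
if [ "$bad" -gt 0 ]; then
    echo "WARNING: $bad drive(s) not OK"
else
    echo "all drives OK"
fi
```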

or run it interactively:


=> ctrl all show config

=> ctrl all show status

Smart Array P400 in Slot 0
  Controller Status: OK
  Cache Status: OK

=> set target ctrl slot=0

"controller slot=0"

=> show config detail

Smart Array P400 in Slot 0
  Bus Interface: PCI
  Slot: 2
  Serial Number: xxxxxxxxxxxxxxxx
  Cache Serial Number: xxxxxxxxxxx
  RAID 6 (ADG) Status: Disabled
  Controller Status: OK
  Chassis Slot:
  Hardware Revision: Rev E
  Firmware Version: 5.20
  Rebuild Priority: Medium
  Expand Priority: Medium
  Surface Scan Delay: 15 sec
  Cache Board Present: True
  Cache Status: OK
  Accelerator Ratio: 100% Read / 0% Write
  Drive Write Cache: Disabled
  Total Cache Size: 256 MB
  Battery Pack Count: 0
  SATA NCQ Supported: True

=> physicaldrive all show status

  physicaldrive 1I:1:1 (port 1I:box 1:bay 1, 450 GB): OK
  physicaldrive 1I:1:2 (port 1I:box 1:bay 2, 450 GB): OK
  physicaldrive 1I:1:3 (port 1I:box 1:bay 3, 450 GB): OK
  physicaldrive 1I:1:4 (port 1I:box 1:bay 4, 450 GB): OK
  physicaldrive 1I:1:5 (port 1I:box 1:bay 5, 450 GB): OK
  physicaldrive 1I:1:6 (port 1I:box 1:bay 6, 450 GB): OK
  physicaldrive 1I:1:7 (port 1I:box 1:bay 7, 450 GB): OK
  physicaldrive 1I:1:8 (port 1I:box 1:bay 8, 450 GB): OK
  physicaldrive 1I:1:9 (port 1I:box 1:bay 9, 450 GB): OK
  physicaldrive 1I:1:10 (port 1I:box 1:bay 10, 450 GB): OK
  physicaldrive 1I:1:11 (port 1I:box 1:bay 11, 450 GB, spare): OK
  physicaldrive 1I:1:12 (port 1I:box 1:bay 12, 450 GB, active spare): OK

=> array all show status

array A: OK

=> logicaldrive all show status

  logicaldrive 1 (3.7 TB, RAID 5): OK

If you don't have a battery, or replacing it will take a long time, you can enable the no-battery write cache.

Enable the RAID write cache even if the RAID battery has failed

ctrl all show detail
hpacucli ctrl slot=0 modify nbwc=enable
hpacucli ctrl slot=0 modify dwc=enable forced

Erase Physical Drive

Run the following command to erase physical drive 2I:1:6 (in array B) on the controller in slot 0.

=> ctrl slot=0 pd 2I:1:6 modify erase

Blink Physical Disk LED

To blink the LEDs on the physical drives behind logical drive 2, do the following. This will make the LEDs blink on all the physical drives that belong to logical drive 2.

=> ctrl slot=0 ld 2 modify led=on

Once you know which drives belong to logical drive 2, turn the LED blinking off as shown below.

=> ctrl slot=0 ld 2 modify led=off

Using hp-health
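The hp-health package ships hpasmcli, which can be run in one-shot mode with -s; a quick health query looks like the following (subcommand names may vary slightly by version):

```
hpasmcli -s "show server"
hpasmcli -s "show temp; show fans; show powersupply"
```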