Zippy

From CHG-Wiki

Revision as of 12:01, 7 April 2015

Zippy is CHG's freshly redone SunFire X4600 workhorse server. Some functionality of the old Zippy has been shifted over to rain.

Please note zippy's reboot sequence below.

General Info

System Name: zippy.geog.ucsb.edu
IP: 128.111.234.240
IPMI IP: 128.111.234.239
Location: EH 1609
UCID #: 088000240
Grant #: (need info)
Serial #: (need info)
General Purpose: Workhorse server
Purchase Date: (need info)
Delivery Date: (need info)
Vendor: (need info)
Contract #: (need info)
Support Expires: (need info)

System Configuration

  • OS Type: Unix
  • OS Version: Red Hat Linux (expired license)
  • CPU info: 32xAMD 2.86GHz (may only be 16 with two cores)
  • Chassis Specs: SunFire X4600
    • Defunct RAID chunks (gibber, jabber, jower [HD: 25TB, RAID6])

Network

  • NIC speed: Gigabit
  • MAC Address - eth0: 00:14:4F:D1:E9:10
  • MAC Address - eth1: (need info)

Memory & Storage

  • Memory: 160GB (DDR2?)
  • HD: 25TB, RAID6

Reboot Sequence

Zippy needs a very specific order to its reboot process.

  1. Shut down zippy with shutdown -h now
  2. Manually (physically) shut down external RAID arrays (they will not shut down with zippy).
  3. Manually power zippy on, wait for system to completely load.
  4. Once zippy's system is completely up, power on external RAID arrays.
  5. Remount RAID arrays.
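
The software side of the sequence above can be sketched as a small shell script. The array mount points (/export/raid1, /export/raid2) are placeholders, not zippy's actual mounts, and DRY_RUN=1 (the default) only prints what would be run:

```shell
#!/bin/sh
# Sketch of zippy's reboot order. The mount points below are
# hypothetical placeholders; DRY_RUN=1 (the default) prints the
# steps instead of executing them.
DRY_RUN=${DRY_RUN:-1}
PLAN=""
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
        PLAN="$PLAN$*;"
    else
        "$@"
    fi
}

# 1. Halt zippy.
run shutdown -h now
# 2-4 are physical steps: power off the external RAID arrays (they do
#     not shut down with zippy), power zippy back on, and wait for the
#     system to come all the way up before powering the arrays on again.
# 5. Remount the RAID arrays once they have spun up.
run mount /export/raid1
run mount /export/raid2
```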

Services

Recently rebuilt, TBD. Formerly Lis, CSCD1 (most tasks shifted to rain).

Notes on October 2013 Redo

Troubleshooting

Timed zippy's startup at two and a half minutes (it booted much faster when we removed some of the RAM/CPUs). Had some graphical issues: the screen would go black before kickstart could really initialize. The system seemed to lock up when trying to access disks or the SAS configuration utility, with a live (but blank) screen. After swapping drives around and pulling various cards, we switched from the KVM to a standalone monitor, which resolved the issue.

Day 1

The first attempt (specifying a build on sda,sdb) came up with an error about not enough disk space, so we tried (sdb,sdc), but that failed with no device found. Added the --all to the initpart in kickstart, and then it hung at the storage device initialization. Left it for 45 minutes with a blank "live" screen, but it was still unbootable afterwards. Subsequent attempts did the same.
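
For context, a disk-clearing stanza of the sort described would look something like the following in a standard Anaconda kickstart file. The directive names here are the stock ones; the actual file used on zippy is not recorded:

```
# Wipe partitions on all attached disks, not just a named drive list,
# and write fresh disk labels. (Stock kickstart syntax; an assumption,
# since the actual kickstart file used on zippy is not recorded.)
zerombr
clearpart --all --initlabel
```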

We tried a number of variations and had one pass where CPUs 8,9,10,11,12,14 were hung. The boot process on the following pass seemed much faster for some reason. A few other passes had other odd little inconsistencies.

We tried a subsequent pass with all but two of the CPU cards pulled and another with the fiber channel controllers pulled, with no change in results.

Day 2

Attempted to access the LSI card BIOS, but got the same black-screen hang when the config utility started (like the one we hit when the jumpstart install got to the disk config).

Tried disabling the PXE boot options, but after a full power cycle it seemed to start attempting PXE boots again.

ILOM Info

Serial Connection

Connect using 9600,N,8,1 (e.g. screen /dev/ttyS0 9600, assuming the serial device appears as /dev/ttyS0)

cd /SP/users/root
show
set password=newpass

Networking

Set the pending network settings (you can't set the regular ones)

cd /SP/network
set pendingipaddress=128.111.101.114
...

Now commit the changes

set commitpending=true

Check that settings updated

show

Reboot the Service Processor (equivalent to a full power-up, i.e. as if the system were just plugged in)

reset /SP

To stop/start the system

stop /SYS
start /SYS

To get at console (NOTE: had issues escaping the console; the default ILOM escape sequence should be Esc followed by "(")

start /SP/console

Go in via SSH and kill the console

stop /SP/console

Network Management Port

Once configured:

ssh root@address

then things are just like the serial console...

Links

  • ILOM help: [1]
  • Notes ganked from [2].

VMs

System Name   IP                Notes
helmet        128.111.234.246   Shrad's sandbox VM
fez           128.111.234.169   Ederer testing machine

Notes

  • RMA'd [?] drive to HGST (Hitachi) on July 7th, 2014 (used ubu shelf spare to replace).