Updated the IMC ODC bitmask to again include the min trans power checks - not sure why they went missing. Also updated the cds/h1/scripts/h1setODCbitmask script.
After about 30 mins of disk activity h1boot completed its reboot. Here is the current status of the systems:
DAQ
DAQ Frame Data Was Lost. Here are the frame gaps on the two frame writers
frame writer | from (GPS, local) | to (GPS, local) |
h1fw0 | 1079622528, 08:10 PDT | 1079631872, 10:45 PDT |
h1fw1 | 1079623744, 08:30 PDT | 1079631936, 10:45 PDT |
So no data from 08:30 to 10:45 PDT. We should investigate whether we can make the DAQ more independent of h1boot. During run time the DAQ logs to the target area, and I assume that is why the frame writer stop times are after h1boot's demise and differ from each other.
At 11:18 I performed a clean restart of the DAQ since the framewriters and broadcaster had restarted themselves. All came back ok except:
The h1lsc model is showing a DAQ error in its STATEWORD. No error from h1dc0 on this node.
h1broadcaster0 shows the same swap-memory problems; I rebooted this unit to clear the error. We will add more memory soon to fix this.
Front Ends
All of the front ends' EPICS updates became unfrozen; no restarts were needed. The DAQ restart cleared the h1iopoaf0 0x2000 status (from last week's model change), but an h1lsc DAQ error is showing (see above).
Workstations
Good news on the workstations: they all became unfrozen and appear to be operational. Some wall-mount TV servers appear not to be operational; we can fix these tomorrow.
Remote Monitoring
The Vacuum and other MEDM screens are being updated on the cds web page.
Remote access using 2FA is now working.
SysAdmin
Looking at dmesg and the logs on h1boot, I cannot determine whether a full FSCK was run. We should still schedule this for Tuesday.
I have switched the RSYNC backup of h1boot by cdsfs1 back on.
IFO
I'll leave the verification of IFO operations to the commissioning crew.
At 10:09 PDT I rebooted h1boot via the front-panel reset button after finding its console locked up. For the past 20 minutes it looks like it has been file-checking the RAID mirror; those two disks are continuously active.
The vacuum overview medm on the VE workstation is live and everything looks good.
The DAQ units are updating their UPTIME counters, but it appears that h1broadcaster0 is down, so most probably no DMT data is being sent. I'll check the frame file status when h1boot is healthy.
MEDMs associated with front ends are visible, but frozen at 06:42:21PDT this morning (GPS time 1079617357).
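For reference, the GPS-to-local-time conversion used for these stamps can be sketched in a few lines of Python (a sketch, not site code; it assumes the 16 s GPS-UTC leap-second offset in effect in March 2014 and a fixed UTC-7 offset for PDT, and the function name is mine):

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)  # GPS time starts here
LEAP_SECONDS = 16  # GPS-UTC offset as of March 2014 (assumption)
PDT = timezone(timedelta(hours=-7))  # Pacific Daylight Time

def gps_to_pdt(gps_seconds):
    """Convert a GPS timestamp to local Pacific Daylight Time."""
    utc = GPS_EPOCH + timedelta(seconds=gps_seconds - LEAP_SECONDS)
    return utc.astimezone(PDT)

print(gps_to_pdt(1079617357).strftime("%H:%M:%S"))  # 06:42:21, the freeze time
```

The two stamps agree, which is a quick sanity check that the MEDM freeze time and the GPS time refer to the same moment.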
The frame writer file listings I'm getting from h1fw0 and h1fw1 are confusing. h1fw0 says it last wrote a frame at 08:10 PDT, and h1fw1 has a last time of 08:30 PDT. I'm not sure why these fall exactly on a 10-minute boundary (only second trends use that interval), and I'm not sure why they would stop at different times, over an hour after the h1boot problem.
The external EPICS gateway is still running, so I am able to report that the CP levels are good:
HVE-LY:CP1_LT100 91.5751
HVE-LX:CP2_LT150 92.5519
HVE-MY:CP3_LT200 90.9646
HVE-MY:CP4_LT250 91.6972
HVE-MX:CP5_LT300 91.5751
HVE-MX:CP6_LT350 91.5751
HVE-EY:CP7_LT400 92.7961
HVE-EX:CP8_LT500 91.3309
I'm noticing a possible problem with h1boot, the NFS server of the /opt/rtcds file system. Machines which mount this file system are not letting me log in (they freeze after accepting my password) and h1boot is not responding to ping requests.
The disk-to-disk backup of h1boot at 05:00 this morning completed normally at 05:04. MEDM snapshot images suggest the problem appeared at 06:42 this morning.
As Kyle mentioned in his alog of Friday, he is running an accumulation measurement in Y1. Alarms were expected as the pressure rises in the module, and this morning we have them. Pressures are at 4e-8 Torr, so there is no cause for worry. The rate of rise is close to 1e-8 Torr per day. If all goes well, this measurement will be terminated on Monday.
model restarts logged for Sat 22/Mar/2014
2014_03_22 12:33 h1fw1
unexpected restart of h1fw1.
HEPI TFs complete; the ISI tripped from damping during the run, but SUS & TMS are good.
ISI TFs now started on opsws1.
(Sheila, Daniel)
We used the normalized PDH signal to feed back to the additive offset of the common mode servo, which in turn moves the mode cleaner and the laser frequency. The loop bandwidth was 20 Hz with a 1/f shape. This kept the red laser light on resonance in the arm cavity indefinitely. We then swept the offset of the normalized PDH error signal to scan the cavity resonance. The attached plot shows the sweep; it took about 20 seconds. The FWHM as measured by the red transmitted light corresponds to 49.7 units, whereas it should be 165 Hz. The calibration factor was therefore set to 0.208 (from 0.063).
This measurement was made with 2 stages of whitening on REFL_9 and 36dB of whitening gain.
In the REFL DC bias path we used a gain of 50 and 6 dB input gain on common mode board input 2, with negative polarity. We had the boost on the COMM PLL off, and used common compensation on the common mode board.
Attached is the measurement of the gain in the DC bias path.
The cavity pole is really 42 Hz, so the calibration should be set to 0.106.
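The calibration arithmetic above can be checked in a few lines (a sketch; the variable names are mine, and the numbers are the ones quoted in this entry):

```python
# Rescale the error-signal calibration so the measured FWHM reads in Hz.
old_cal = 0.063            # previous calibration factor
measured_fwhm = 49.7       # FWHM in uncalibrated units (red transmitted light)
expected_fwhm_hz = 165.0   # FWHM originally assumed for the cavity
new_cal = old_cal * expected_fwhm_hz / measured_fwhm
print(round(new_cal, 3))   # ~0.209, consistent with the 0.208 set above

# With the cavity pole actually at 42 Hz, the FWHM is 2 * 42 = 84 Hz,
# so the calibration scales down by 84/165:
pole_hz = 42.0
corrected_cal = new_cal * (2 * pole_hz) / expected_fwhm_hz
print(round(corrected_cal, 3))  # ~0.106, the corrected value quoted above
```

The scaling simply multiplies the old factor by the ratio of the true FWHM to the FWHM the signal currently reports.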
Aidan, Dave H.
We thoroughly wiped down the CO2PX table and enclosure today. Most of the plumbing is installed; we're going to connect the laser and RF driver and test for leaks on Monday.
The chiller and CO2 laser current and voltage calibrations have been applied to the H1TCSCS front end model. I recorded a safe burt restore snap with these calibrations and the chillers set up to 20C.
The L1-TCS-R1 rack was installed in the HAM4-BSC2-BSC3 vertex and a cable tray was run to it.
We're getting a large shipment of the remaining CO2P equipment on Monday.
We removed all ISC cables from the J-clips, inserted PEEK tubing for better protection against the edges of the clips, and changed the clips so that two clips securely hold each cable. Thin cables and thick ones needed somewhat different treatments, but in the end everything was fixed securely. Corey should have taken before/after photos.
This changed how the cables were routed to the top, and thus changed the PIT and YAW, but it turns out that we didn't have to rebalance anything; a little push from the bias slider was all it needed (15 urad in PIT and 70 in YAW, or something like that).
We checked that all picos work.
We checked that the beam diverter reed switches work. However, the open/close logic seems to be reversed in Beckhoff.
We centered the BOSEMs and set the depths according to the offsets that were already in place. Supposedly these offsets are half the open values that were measured at some point.
There is no apparent interference.
Before/after photos of how cables were clamped on TMS Table & a photo of where the "traveling screw" was put to finely balance the table.
Have also posted EY TMS photos in Resource Space, here.
Two Matlab sessions on opsws0--please avoid this if possible.
Please log entries when going into the End Station; visits likely won't bother the TFs as long as you don't bump anything and trip things. If you see the ETMY QUAD, HEPI, or ISI tripped, please give me a call so I can restart them this weekend. I'll be in to start the ISI TFs, but the sooner the better.
Though not likely, depending upon the alarm values used for Y1, PT124B and/or PT243B could alarm (RED) at some point as a result of the monitored pressure increase in the Y1 BT module before this test is complete. If so, no action is required.
Jeff spelled me for the morning so I could do EY TMS work (Thanks, Jeff!)
EY Work:
TMS final tasks completed at lunch today. In the afternoon the traffic jam starts: Pcal camera alignment check, B&K Hammer tests, Jax working on ISCT-EY table. Jim is waiting to unlock the ISI & start TFs.
Here are the day's activities:
Corey, Sheila
Corey is recovering HAM3 from a HEPI L4C trip. We are also going to raise the threshold for the L4C WD to 99000.
If this level is good enough to protect the suspensions, it seems like the threshold should be set there permanently to avoid unnecessary trips. How can we make this change permanent?
I'm using the same filters for ITMX PIT as for ETM PIT; here are spectra showing the performance. In the red traces both PIT and Yaw damping are on; in the blue traces only Yaw damping is on.
The second screenshot shows the coil driver outputs; the dashed lines are with only Yaw damping on, the solid lines with both PIT and Yaw damping on. I will leave all four OpLev damping loops running for now: ITMX+ETMX, PIT+Yaw.
After Travis notified me of a problem on ETMy, where the copper clamps on the ring heater were touching when it was moved into final position, I went in-chamber and made some adjustments to keep the upper and lower clamps separated. I decided to check whether the same problem existed on ITMx and ITMy. Unfortunately, ITMy had part of its macor break while I was adjusting the copper clamps. ITMx had its glass former break sometime after its installation onto the lower quad. Both lower ring heaters have been removed.
[ FYI ... There is a specially designed Ring Heater Segment Replacement Fixture (D1101253), which is to be used ANY time a segment needs to be removed from a QUAD, if a dummy mass or a TM is also on the QUAD ]
Note: the ITMx unit has already been stripped of its lowest dummy mass in prep to load the new glass mass later this week, so the ITMx unit had extremely easy access for this RH work and no fixturing was required.
It's worse than I originally thought. The glass former broke along with the macor on the ITMy lower ring heater.
Apparently, damage was done (also) to the very tip-end of the glass former when the adjustment was made (14-March) to the copper clamp plates of the lower RH segment (assembly D1001895-v8 SN-210) on the ITM-Y quad. This damage was not revealed until the lower segment was dis-assembled. Photos are attached.
Regarding the lower ITM-Y assembly issues, see the attached (PDF) package of images.
The following feedback (attached) has been received, as guidance, from SYS.
Here are the central syslogs for the event.
Mar 23 06:42:18 h1boot kernel: [12441189.877539] CPU 0
Mar 23 06:42:18 h1boot kernel: [12441189.877544] Modules linked in:
Mar 23 06:42:18 h1boot kernel: [12441189.877953]
Mar 23 06:42:18 h1boot kernel: [12441189.878157] Pid: 4652, comm: nfsd Not tainted 2.6.34.1 #7 X8DTU/X8DTU
Mar 23 06:42:18 h1boot kernel: [12441189.878369] RIP: 0010:[<ffffffff8102f70e>] [<ffffffff8102f70e>] find_busiest_group+0x3bc/0x784
Mar 23 06:42:18 h1boot kernel: [12441189.878785] RSP: 0018:ffff8801b9cefa60 EFLAGS: 00010046
Mar 23 06:42:18 h1boot kernel: [12441189.878993] RAX: 0000000000000000 RBX: ffff880001e0e3c0 RCX: 0000000000000000
Mar 23 06:42:18 h1boot kernel: [12441189.879403] RDX: 0000000000000000 RSI: ffff880001e0e4d0 RDI: 00000000fe15cf79
Mar 23 06:45:25 script0 kernel: [12378753.814330] nfs: server h1boot not responding, still trying
Mar 23 06:45:27 script0 kernel: [12378755.721463] nfs: server h1boot not responding, still trying