Ops Shift Log: 10/24/2016, Owl Shift 07:00 - 15:00 UTC (00:00 - 08:00 PT)

STATE of H1: IFO is locked at 26.9 W, with 69.6 Mpc. Intent Bit: Observing
Wind: Gentle to Moderate Breeze (8-18 mph)
0.03 - 0.1 Hz: EY at 0.01 um/s, EX at 0.07 um/s, CS at 0.09 um/s
0.1 - 0.3 Hz: All around 0.3 um/s
Outgoing Operator: Nutsinee
Incoming Operator: Ed

Activity Log: Time - UTC (PT)
07:00 (00:00) Take over from Nutsinee
08:55 (01:55) PI Mode-18 ringing up. Changed phase from -100 to -350 to suppress the ring-up
09:00 (02:00) PI Mode-28 ringing up. Changed phase from 60 to -80 to suppress the ring-up
11:13 (04:13) PI Mode-18 ringing up. Changed phase from -350 to -200 to suppress the ring-up
15:00 (08:00) Turn over to Ed

Shift Details:
Support: Sheila (by phone)
Shift Summary: IFO locked in NOMINAL_LOW_NOISE for the past 2 hours. All appears normal. PI Mode-18 started to ring up; changed phase from -100 to -350 to suppress it. Mode-28 is elevated but still below 1; changed its phase from 60 to -80 to suppress the ring-up. IFO has been locked in Observing mode all shift. Environmental conditions are good. The wind has dropped off to near zero and microseism is low. The 0.1-0.3 Hz band has been ringing up for the past two hours, but is still below 0.5 um/s.
Attached is the voltage noise spectrum of the Kepco BHK 500-0.4MG power supply that provides the high voltage for the pre-modecleaner and injection locking. I need to check with Marc to see what the voltage monitor divider ratio was.
TITLE: 10/23 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing
INCOMING OPERATOR: Jeff B.
SHIFT SUMMARY: Commissioning most of the evening. We were down for a bit due to an earthquake. No problem locking tonight. Intent bit has been set.
LOG:
2:41 Daniel and Stefan going to end stations to bang on the beam tube
~3:50 Daniel, Stefan back.
6:36 Intent bit set. Enjoy the data!
Sheila, Terra
We changed the bias on ETMX and watched the 60 Hz and 120 Hz lines in DARM. The 60 Hz line was reduced by 50% by switching from +400 V to -400 V, while the 120 Hz line was reduced by almost a factor of 2. Apparent changes in other parts of the spectrum weren't reproducible.
We set it back to +400V
Daniel, Stefan,
With the interferometer in a decent noise state at 25 W, we went to both end stations with a calibrated rubber mallet (actually a shovel with a rubber cover on the handle).
We first shuttered the PCAL lasers because we didn't want to short-circuit the scatter measurements.
Upon banging the vacuum vessel the noise increased in a fairly broadband manner. On top of that, some individual peaks at ~90 Hz and ~110 Hz were also rung up.
The worst spot in both end stations was the periscope supporting PCAL and camera. ETMX seemed to be slightly more sensitive.
Attached is a reference spectrum showing the sensitivity when we did the injections, including references from the best recent H1 and the best recent L1.
The POP_A_RF9_I to SRCL matrix element is now -0.025. This cleaned up at least the SRCL error point a little. The change is in the guardian.
We've been locked at 26 W looking at low-frequency noise, so just to make things easier I cut off the HARD pitch loops at a slightly lower frequency.
I lowered the gain of DHARD P from 30 to 20, and I lowered CHARD P from -0.15 to -0.14. I replaced the JLP25 filters with second-order elliptic filters at 10 Hz. I put this in the guardian to happen when we are locked below 45 W.
This helps DARM from about 15-27 Hz. In the attached screenshot you can see how DARM and the coherences changed.
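The filter change above can be illustrated with a quick response check. This is a plain second-order low-pass sketch, not the actual JLP25 or elliptic filters in the H1 ASC path (a real second-order elliptic design would come from something like scipy.signal.ellip); the 10 Hz corner comes from the log, while the Q value is an assumption.

```python
# Illustrative second-order low-pass response check, standing in for the
# second-order elliptic cutoff filters mentioned above. Q is assumed.
import math

def lowpass2_mag(f, f0=10.0, q=0.707):
    """|H(s)| of H(s) = w0^2 / (s^2 + (w0/q)*s + w0^2) at frequency f [Hz]."""
    w0 = 2 * math.pi * f0
    s = 1j * 2 * math.pi * f
    return abs(w0**2 / (s**2 + (w0 / q) * s + w0**2))

# In-band control signal passes; content well above the cutoff rolls off
# roughly as 1/f^2, reducing sensing-noise injection into the suspensions.
print(round(lowpass2_mag(1.0), 3))    # ~1.0 in-band
print(round(lowpass2_mag(100.0), 3))  # ~0.01 well above cutoff
```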
In the past few weeks and months it has been becoming increasingly clear that we are limited by beam jitter when going to higher power. The beam jitter requirements were derived assuming an rms misalignment of the test masses of 10^-9 rad, in units of divergence angle or beam radius; see T0900142. The coupling mechanism is through the symmetric alignment matrix, which mixes the TEM00 and TEM10 modes. This transforms jitter in the first-order mode back to the zeroth order, where it beats against the carrier field to produce an intensity fluctuation. This mechanism is independent of the OMC, since the intensity noise is generated in the interferometer. The jitter requirement into the input mode cleaner is:
We can compare this with the measurement from alog 30237 in the attached pdf. The solid olive curve represents the requirement, which is a factor of 10 below the level (dashed olive line) at which one would expect the noise to be visible in the gravitational wave readout at full power, assuming a 10^-9 rad rms misalignment.
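The coupling mechanism described above (a static misalignment mixing first-order-mode jitter back into TEM00, where it beats against the carrier) can be sketched in small-signal form. The simple 2*alpha*delta scaling and the example jitter amplitude are illustrative assumptions, not the T0900142 derivation.

```python
# Small-signal sketch of jitter-to-intensity coupling: a static
# misalignment alpha (in units of divergence angle or beam radius)
# mixes a first-order-mode jitter amplitude delta back into TEM00.
# The beat with the carrier gives dP/P ~ 2 * alpha * delta.
def rin_from_jitter(alpha_rms, delta_jitter):
    """Fractional power fluctuation from jitter beating against the carrier."""
    return 2.0 * alpha_rms * delta_jitter

# With the assumed 1e-9 rms misalignment, a jitter of 1e-6 (same
# normalized units, chosen only for illustration) gives:
print(rin_from_jitter(1e-9, 1e-6))  # ~2e-15 fractional intensity noise
```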
The following observations can be made:
Possible alternative explanations to the visible HPO jitter in the gravitational wave readout at 50 W input:
16:44 Begin bringing back IFO from computer crash.
17:06 Begin aligning/locking process
17:48 On the phone with Dave Barker about an ALS_XARM guardian node I was having trouble restarting. Ezca connection error to the H1:ALS-X_WFS_SWITCH channel
22:00 Damping ITMY bounce mode. Flipped the sign on the gain and added 0.1 gain
22:40 Damped PI Mode 27 by flipping the sign and then adding 30 degrees to the phase.
22:58 PI Mode 27 needed another sign flip
22:47 NLN, ≈60 Mpc. Handing off to Nutsinee
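The Mode 27 moves above (a sign flip followed by a 30-degree phase step) can be summarized as a single net rotation of the damping phase, since flipping the damping gain sign is equivalent to a 180-degree phase shift. This is a stand-in sketch, not the real guardian/ezca interface.

```python
# Sketch: combine a gain sign flip (equivalent to 180 deg) with an
# additional phase step, wrapping the result to [-180, 180) degrees.
# Stand-in helper only; the site code writes phases via EPICS/ezca.
def net_phase(phase_deg, sign_flip=False, extra_deg=0.0):
    p = phase_deg + (180.0 if sign_flip else 0.0) + extra_deg
    return ((p + 180.0) % 360.0) - 180.0  # wrap to [-180, 180)

# "Flip the sign, then add 30 degrees" is a net 210 deg rotation:
print(net_phase(0.0, sign_flip=True, extra_deg=30.0))  # -150.0
```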
BruCo report for last night's lock can be found here:
https://ldas-jobs.ligo.caltech.edu/~gabriele.vajente/bruco_1161255617/
Ed, Sheila
Trouble with LOCKING_ALS sent us looking for HV settings on ETMX. It seemed that the L3 stages were set to LO volts rather than HI. We believe that when we toggled the L3 LL HI/LO control, the state of the UL changed as well. We're also not sure why SDF didn't catch this change, and we couldn't find it in SDF with a search.
A similar thing:
The OMC DCPD whitening settings were incorrect and the DCPD_MATRIX elements were all zero. These records did exist in SDF, but were set to zero in SDF.
This morning started with a smorgasbord of troubles. Patrick aLogged what happened there. After we seemingly got everything back up, there were still some lingering issues with connections/channels that were finally resolved through a half-dozen or so phone calls with Dave Barker. His aLogs should show the gory details. I'm finally trying to get things re-aligned so I can get this ship sailing again.
Existing MEDMs continued to be connected to h1iscex, but no new connections were possible. Also I was unable to ping or ssh to h1iscex on the FE-LAN. This also meant that the Dolphin manager was unable to put this node into an offline state. The only recourse was to put SUS-EX and SEI-EX into a safe state and remotely power cycle h1iscex via its IPMI management port. As expected, this in turn glitched the attached Dolphin nodes in the EX-Fabric (h1susex and h1seiex). I restarted all the models on these two systems and Ed is now recovering EX.
At approximately 07:50 PDT this morning the /opt/rtcds file system (served by h1fs0) became full. This caused some front-end EPICS processes to segfault (example dmesg output for h1susb123 shown below). Presumably these models' EPICS processes were trying to do some file access at this time. The CDS overview is attached showing which specific models had problems. At this point guardian stopped running because it could not connect to critical front ends. LOCKLOSS_SHUTTER_CHECK also reported an NDS error at this time (log shown below); further investigation is warranted since h1nds0 was running at the time.
On trying to restart h1susitmx, the errors showed that /opt/rtcds was full. This is a ZFS file system, served by h1fs0. I first attempted to delete some old target_archive directories, but ran into file-system-full errors when running the 'rm' command. As root, I manually destroyed all the ZFS Snapshots for the month of May 2016. This freed up 22.3GB of disk which permitted me to start the failed models.
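The snapshot cleanup above amounts to a selection step: list the snapshots, keep the ones from the target month, then destroy them. The sample listing and dataset name below are made up for illustration; real use would parse the output of `zfs list -H -t snapshot -o name` and run `zfs destroy` as root, with care.

```python
# Sketch of selecting month-old ZFS auto-snapshots for cleanup.
# SAMPLE is a fabricated listing; real input would come from
# `zfs list -H -t snapshot -o name`.
SAMPLE = """\
h1fs0/opt-rtcds@zfs-auto-snap_2016-05-01
h1fs0/opt-rtcds@zfs-auto-snap_2016-05-15
h1fs0/opt-rtcds@zfs-auto-snap_2016-06-01
"""

def snapshots_for_month(listing, month="2016-05"):
    """Return snapshot names whose snapshot part (after '@') mentions `month`."""
    return [line for line in listing.splitlines()
            if "@" in line and month in line.split("@", 1)[1]]

for snap in snapshots_for_month(SAMPLE):
    # Actual cleanup would run: zfs destroy <snap>   (as root)
    print(snap)
```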
Note that only the model EPICS processes had failed, the front end cores were still running. However in order to cleanly restart the models I first issued a 'killh1modelname' and then ran 'starth1modelname'. Restarting h1psliss did not trip any shutters and the PSL was operational at all times.
I've handed the front ends over to Patrick and Ed for IFO locking, I'll work on file system cleanup in the background.
I've opened FRS6488 to prevent a recurrence of this.
[1989275.036661] h1susitmxepics[25707]: segfault at 0 ip 00007fd13403c894 sp 00007fffb426b9a0 error 4 in libc-2.10.1.so[7fd133fda000+14c000]
[1989275.045095] h1susitmxepics used greatest stack depth: 2984 bytes left
[1989275.086076] h1susbsepics[25384]: segfault at 0 ip 00007f2a5348e894 sp 00007fff908c88e0 error 4 in libc-2.10.1.so[7f2a5342c000+14c000]
[1989275.127643] h1susitmyepics[25166]: segfault at 0 ip 00007f5905a59894 sp 00007fff20f878d0 error 4 in libc-2.10.1.so[7f59059f7000+14c000]
2016-10-23T14:51:50.62907 LOCKLOSS_SHUTTER_CHECK W: Traceback (most recent call last):
2016-10-23T14:51:50.62909 File "/ligo/apps/linux-x86_64/guardian-1.0.2/lib/python2.7/site-packages/guardian/worker.py", line 461, in run
2016-10-23T14:51:50.62910 retval = statefunc()
2016-10-23T14:51:50.62910 File "/opt/rtcds/userapps/release/isc/h1/guardian/LOCKLOSS_SHUTTER_CHECK.py", line 50, in run
2016-10-23T14:51:50.62911 gs13data = cdu.getdata(['H1:ISI-HAM6_BLND_GS13Z_IN1_DQ','H1:SYS-MOTION_C_SHUTTER_G_TRIGGER_VOLTS'],12,self.timenow-10)
2016-10-23T14:51:50.62911 File "/ligo/apps/linux-x86_64/cdsutils/lib/python2.7/site-packages/cdsutils/getdata.py", line 78, in getdata
2016-10-23T14:51:50.62912 for buf in conn.iterate(*args):
2016-10-23T14:51:50.62912 RuntimeError: Requested data were not found.
2016-10-23T14:51:50.62913
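One way to harden the getdata call in the traceback above is a small retry wrapper, since the requested data may simply not have reached the NDS yet when the state runs. The fetch function below is a stub standing in for cdsutils.getdata, and the retry policy is an assumption, not the site code.

```python
# Retry wrapper sketch for an NDS-style data fetch that can raise
# RuntimeError("Requested data were not found.") transiently.
import time

def fetch_with_retry(fetch, channels, duration, start, tries=3, wait=0.0):
    """Call fetch(channels, duration, start), retrying on RuntimeError."""
    for attempt in range(tries):
        try:
            return fetch(channels, duration, start)
        except RuntimeError:
            if attempt == tries - 1:
                raise
            time.sleep(wait)

# Stub that fails once then succeeds, to exercise the retry path.
calls = {"n": 0}
def stub_getdata(channels, duration, start):
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("Requested data were not found.")
    return [0.0] * duration

data = fetch_with_retry(stub_getdata, ["H1:ISI-HAM6_BLND_GS13Z_IN1_DQ"], 12, 0)
print(len(data), calls["n"])  # 12 2
```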
Started Beckhoff SDF for h1ecatc1 PLC2, h1ecatx1 PLC2, and h1ecaty1 PLC2 by following the instructions at the end of this wiki: https://lhocds.ligo-wa.caltech.edu/wiki/UpdateChanListBeckhoffSDFSystems

controls@h1build ~ 0$ starth1sysecatc1plc2sdf
h1sysecatc1plc2sdfepics: no process found
Specified filename iocH1.log does not exist.
h1sysecatc1plc2sdfepics H1 IOC Server started
controls@h1build ~ 0$ starth1sysecatx1plc2sdf
h1sysecatx1plc2sdfepics: no process found
Specified filename iocH1.log does not exist.
h1sysecatx1plc2sdfepics H1 IOC Server started
controls@h1build ~ 0$ starth1sysecaty1plc2sdf
h1sysecaty1plc2sdfepics: no process found
Specified filename iocH1.log does not exist.
h1sysecaty1plc2sdfepics H1 IOC Server started
Plot 1 shows the DC signals of all 4 I segments of ASA36. Note that seg 3 is ~2.5 times larger than the others.
Plot 2 shows the updated AS_A_RF36_I matrix - the gains for seg 3 have been dropped to -0.4 from -1.
Plot 3 shows the resulting error signal - it now crosses zero where the buildups and couplings for SRCL are good.
Closed the SRC1 PIT and YAW loops with a gain of 10, and input matrix element of 1. I will leave this setting for the night - although it is not in guardian yet.
I accepted the funny matrix in SDF, and added this in the SRM ASC high power state. The loops should only come on for input powers less than 35 Watts. Nutsinee and I tested it once.
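The segment-gain change in the RF36 matrix above follows a simple rule: scale each segment's gain inversely with its DC level so the segments contribute equally, which takes seg 3 (at ~2.5x the others) from -1 to about -0.4. The DC values below are illustrative stand-ins, not the measured ones.

```python
# Sketch of rebalancing WFS segment gains by their DC levels: a segment
# with k times the reference DC gets its gain scaled by 1/k. DC values
# are made up for illustration.
def balanced_gains(dc_levels, nominal=-1.0):
    """Scale the nominal gain inversely with each segment's DC level."""
    ref = min(dc_levels)
    return [round(nominal * ref / dc, 2) for dc in dc_levels]

# Seg 3 at ~2.5x the others ends up near -1/2.5 = -0.4:
print(balanced_gains([1.0, 1.0, 2.5, 1.0]))  # [-1.0, -1.0, -0.4, -1.0]
```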
TITLE: 10/22 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Commissioning
INCOMING OPERATOR: Patrick
SHIFT SUMMARY:
I would recommend that instead of putting the extra factor of 2 gain in PRCL2, people double the gain in PRCL1. The guardian doesn't touch the gain in PRCL2, but later in the locking sequence it will adjust PRCL1 to its nominal value. If people adjust the gain in PRCL2 and forget to reset it to 1, this can cause problems.
If we are consistently needing higher gain, there is a parameter in LSC params that adjusts the gains for lock acquisition.
Keita, Sheila, Kiwamu
This afternoon Keita and I did another test of opening and closing DBB shutters, since Keita realized that there are multiple shutters that matter. The results are in the screenshot. We only see two shutters on the PSL layout (SH01 in the 35 W path to the DBB and SH02 in the 200 W path) and in the photos documenting the table, but there must be a third shutter, perhaps inside the DBB box. We did not test switching to the 35 W beam because that caused a lockloss last night. Apparently the shutters used here are Thorlabs SH05, which has an aluminum blade according to the Thorlabs website.
When changing between shutter states today we saw a broadband change in the DARM noise throughout our 200 Hz-1 kHz lump (this is a little different from what we saw last night). However, we saw 3 different noise states in DARM depending on the shutter requests, shown in the attachment.
        | shutter open | shutter closed
no beam | worst        | best
200 W   | intermediate | worst
We guess that the shutter which is controlled by the epics channel "PSL-DBB_SHUTTER_DBB" is inside the DBB box itself and not on the layout. It is hard to explain the table above. For example, if no beam is selected, and both beams are really blocked before they reach the DBB, why would closing the shutter inside the DBB matter?
Kiwamu and I went inside the PSL and placed beam dumps in the paths to the DBB. We placed a "black hole" beam dump (no cone in the middle) in the HPO path (a 250 mW beam between M3 and M12 on the layout). Looking at that beam with an IR card, we could see a corona around it; pictures will be attached to this alog. This corona is scattered around, hitting the black baffle near the DBB aperture and other things. We also placed a black glass beam dump upstream of the front-end laser beam path to the DBB, just before the shutter.
Update:
After waiting out the Japanese earthquake, we relocked. The lump was smaller in the first moments of the lock. After a few minutes the peaks reappeared, but they still changed when we opened and closed the shutters, in a way that is repeatable, although the plot is confusing since the overall level of noise was changing, probably with the thermal state (each time we opened the shutter, things got worse than they had been).
We do not understand how changing the shutter state impacts the DARM noise, although we think we have ruled out scattered light. Kiwamu thought that perhaps the change in the noise could be due to the change in the diffracted power when we move the shutter (see Keita's alog 30679). We tried a test of changing the diffracted power, which unlocked the IFO. It could also be through the same electrical coupling that means the diffracted power changes when we open the shutter.
One consequence of these table layout modifications is that we've lost the signal that monitors the output of the high power oscillator.
Keita suggested that one non-optical way that shutter states could impact DARM is if somehow the shutters move more when open than closed. I had a look at accelerometers on the PSL table (table 1). There is coherence of around 0.3 between this accelerometer and DARM at the frequency of the peaks which depend on the shutter state. However, there was no difference in the coherence or the spectrum of the accelerometer when the shutters were open. It seems unlikely this is a mechanical coupling.
Also, the second attachment shows a trend of the power out of the PSL as we changed the shutters (DBB_SHUTTER controls SH01 and SH02, the two that are outside the DBB; 0 is both closed, 1 is the 200 W beam open; DBB_SHUTTER_DBB is the one that must be inside the DBB box itself). The two shutters we switched each reduced the output power by about 1.5 W, and the impact is additive (both shutters closed loses about twice as much power as either one closed).
However, as shown in the plot in the original post, the noise impact is not additive: both shutters open is slightly less noisy than either single shutter open.
I didn't add the attachment to the above alog showing how the power out of the PSL changed as we opened and closed shutters on the DBB.
Here is one, which shows both the ISS diffraction changing and a laser power monitor.