Daniel, Sheila, Dave:
Daniel's latest h1sqz model was installed (an ADC channel shuffle). It did not require a DAQ restart. Sheila's latest filter changes were loaded on restart.
FAMIS 6932. The following appear elevated:
BS: ST1 H1, V1 & ST2 H1, V1
ETMY: ST2 V2
ITMX: ST1 V2, ST2 H3
ITMY: ST1 V2, ST2 V2
HAM3: V1
Attached is a picture of the MC2 Trans QPD during the initial alignment, when I devised a setup that used a red laser pointer to mimic the IMC beam in order to align the QPD.
In the images, there are three beams.
Through careful evaluation, Keita and I determined that the upper left beam is the second internal reflection from MC2 (vertical wedge), the far right beam is a reflection from the black glass that's behind the curved steering mirror to the QPD, and the center beam (slightly clipped on the edge of the QPD aperture, since it was not yet aligned) is the real MC2 transmitted beam that we want to center on the MC2 Trans QPD.
Tests are under way to evaluate the likelihood that the alignment into the IMC could be altered in such a way that one of the two beams we do not want ends up on the MC2 Trans QPD.
Confirmation that the wrong beam is currently centered on MC2 Trans: on the left, REFL has a beam and MC2 Trans reads less than 0.15 on each quadrant; on the right, REFL has no light and the MC2 Trans quadrants read around 1.5.
Realized my snapshot of StripTool did not include enough info - updated dataviewer plot attached.
Attached is the final in-air IO alignment. The time stamp in UTC is 12/18/2017 at 23:30. This is when the ISS second loop QPD was aligned, the pitch and yaw signals were near zero, the IMC was flashing, and there was a good signal on the IM4 Trans QPD.
(Ed, Daniel)
SQZ Chassis 1: SHG CM board - Wires for slow path compensation (L19 O1) and boost (L19 O2) were switched.
SQZ Chassis 2: CLF CM board - Wires for slow path filter (L16 O3) and fast path limiter were rewired to DB37 slot 10, pins 37 and 19, respectively. (They were on pins 18 and 37.)
SQZ Chassis 2: The TEC controller power cables were added - The LEDs in the back are now lit up!
Leap seconds need updating. I get this warning every time I run "z step". Not a big deal, but it's filling up my terminal windows.
/ligo/apps/linux-x86_64/gpstime/lib/python2.7/site-packages/gpstime-0.1.2-py2.7.egg/gpstime/__init__.py:150: RuntimeWarning: Leap second data is expired.
Run 'update_leapdata()' to download the latest bulletin from the IETF
RuntimeWarning, stacklevel=1)
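The fix is presumably just what the warning suggests; a minimal sketch, assuming update_leapdata() is exposed at the gpstime package top level as the message implies:

    # run once to refresh the leap-second table (needs network access)
    import gpstime
    gpstime.update_leapdata()  # downloads the latest bulletin from the IETF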
D. Sigg, E. Merilh, M. Pirello
Exchanged Timing Comparator S1107952 with the modified S1201227. This modification adds frequency counter channels to the timing comparator; see ECR E1700246.
The TwinCAT software and MEDM screens were updated as well.
Daniel, Nutsinee
Here's a quick note of what we measured out there, with the RF power coming in through (and monitored via) temporary cables. These RF signals drive AOM1 and AOM2. There seems to be too much loss at Pmon2 compared to Pmon1 (almost 11 dB of loss versus 5 dB). The cable lengths going from the CPL to the Pmons are the same. The top RF input gets its power all the way from the CER while the bottom RF input gets its power from the SQZ rack, so the difference in the incoming RF power makes sense. Not sure if the factor of two difference coming straight out of the CPL makes sense, though.
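For reference, a factor of two in power is about 3 dB, so the ~6 dB excess loss at Pmon2 relative to Pmon1 would be roughly a factor of four. A quick sanity check (plain math, nothing site-specific):

    # convert a power ratio in dB to a linear factor: 10**(dB/10)
    def db_to_factor(db):
        return 10 ** (db / 10.0)

    print(db_to_factor(3.0))      # ~2.0: a factor of two in power is ~3 dB
    print(db_to_factor(11 - 5))   # ~4.0: the ~6 dB excess loss at Pmon2 vs Pmon1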
Terry, Nutsinee
After realizing that the way the cables were hooked up didn't match the wiring diagram, we decided to make it right once and for all. The output from the top RF amp used to drive AOM2; now it's driving AOM1, and the 2 dB attenuator was taken off. The RF power measured on the table through the helix cable was 34 dBm (2.5 W), still within the maximum drive power allowed (2.9 W). The AOM1 (IntraAction ATM-200) diffraction efficiency is now 82%.
The bottom RF amp is now driving AOM2 (AA Opto-Electronic MT200). The 6 dB attenuator is still there. The power measured on the table was 33.2 dBm (2.1 W). The maximum power allowed is 2.2 W. The diffraction efficiency of AOM2 is still not great; the best we've had was 70%.
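The quoted wattages are consistent with the dBm readings; a minimal conversion check:

    # dBm to watts: P = 1 mW * 10**(dBm/10)
    def dbm_to_watts(dbm):
        return 1e-3 * 10 ** (dbm / 10.0)

    print(dbm_to_watts(34.0))   # ~2.5 W driving AOM1 (2.9 W max)
    print(dbm_to_watts(33.2))   # ~2.1 W driving AOM2 (2.2 W max)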
Daniel, Sheila, Terry, Nutsinee
We hooked up temporary extension cables to bring signals from the SQZ rack to the table. Below is the list of what the temporary labels correspond to. Note that all the temporary cables are 50 feet long.
TNC 1 = SQ_202_2 (goes on SQZT6. Not hooked up to anything at the moment)
TNC 2 = SQ_203_2 (ISCT6 RF1)
TNC 3 = SQ_12_2 (OPO phase mod)
TNC 4 = SQ_14_2 (SHG phase mod)
TNC 5 = SQ_21_2 (RF power input, U4)
TNC 6 = SQ_212 (TTFSS LO)
TNC 7 = SQ_229 (RF power input, U2)
TNC 8 = SQ_353 (CPL U4)
TNC 9 = SQ_370 (TTFSS mon)
TNC 10 = SQ_372 (CPL U1)
-------------- TNC11-TNC19 are hooked up to DCPD patch panel --------------
TNC 11 = 375_1 (fiber rejected)
TNC 12 = 376_1 (CLF launch)
TNC 13 = 377_1 (CLF rejected)
TNC 14 = 378_1 (SHG launch)
TNC 15 = 379_1 (SHG rejected)
TNC 16 = 380_1 (Seed launch)
TNC 17 = 381_1 (LO launch)
TNC 18 = 382_1 (this one doesn't go anywhere on the table, according to the wiring diagram.)
TNC 19 = 383_1 (fiber trans)
-------------- Leftovers --------------
TNC 20 = used to bring back SHG PDH signal from the demod on the SQZ rack
This is for a leak that was noticed last week. Jeff Bartlett noticed a large puddle forming under the chillers. When I went to investigate, I found that the X chiller water level was low; I pulled off the panels and saw where a small leak had been dripping on top of the radiator for quite some time. I think this explains the difference in the refill logs over the past couple of months of operation. You can also see sediment forming where the leak was evaporating. I pulled the motor and water pump from the chiller body, separated the water pump, and got to the seal, where you can see the ceramic seal had become chipped. I will try to find a suitable replacement so this chiller can be a functional spare. Meanwhile, the spare is waiting on some new quick-disconnects to arrive, as the previous ones had been cross-threaded and I didn't trust them to hold a good seal.
Sheila, Jenne
We had a large EQ at just about 19:00 UTC; a couple of ISIs tripped, but no suspensions so far. I set the seismic configuration to LARGE_EQ_NO_BRSXY after some BSC ISIs had already tripped. While I was doing that, some more BSC ISIs tripped, as well as BS HEPI, but I don't know whether the change in state caused the trips or the earthquake did. I reset all of the ISI watchdogs; the triggers for the trips varied, including GS13s, ST2 CPSs, T240s, and ST1 actuators. A few minutes later (perhaps at 19:05) ITMX and ITMY tripped again. I have just reset them now, at about 19:09, but they are not re-isolating because the guardians are waiting for the T240s to settle.
This should be some interesting data to see how the changes in the ISI models have changed the way things respond.
Our seismic FOM is not updating even when I hit update, and Terramon is down, but the USGS says there is a 7.6 in Honduras.
At around 19:35 ITMX and ITMY tripped to damping only again, both because of the T240s and ST2 CPSs. I am going to leave things this way and head home.
Because it wasn't mentioned here, I want to bump my alog 38921, where I detail the VERY_LARGE_EQ button on the SEI_CONF screen. This is probably the kind of earthquake we should use that button for. Unfortunately, it probably wouldn't have worked this time, as we had changed the names of the DAMPED states for the chambers. I've updated it now to take all of the ISIs to ISI_DAMPED_HEPI_OFFLINE.
To reiterate what the button does (a rough illustrative sketch follows the list):
1. Turns off all the sensor correction via the SEI_CONF guardian.
2. Switches all the ISI chamber guardians to ISI_DAMPED_HEPI_OFFLINE
3. Switches all the GS13s to low gain (except the BS & HAM6), and all BSC ST1 L4Cs to low gain as well
4. Puts HAM1 HEPI in READY
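Purely as a hypothetical illustration of the sequence above, here is what those requests could look like as channel-access writes with pyepics; every PV name and state string below is a placeholder, not the real H1 channel list:

    # hypothetical illustration only -- PV names and state strings are placeholders
    from epics import caput

    # 1. turn off sensor correction by switching the SEI_CONF guardian state
    caput('H1:GRD-SEI_CONF_REQUEST', 'VERY_LARGE_EQ')

    # 2. request ISI_DAMPED_HEPI_OFFLINE from every ISI chamber guardian
    for chamber in ('BS', 'ITMX', 'ITMY', 'ETMX', 'ETMY',
                    'HAM2', 'HAM3', 'HAM4', 'HAM5', 'HAM6'):
        caput('H1:GRD-ISI_%s_REQUEST' % chamber, 'ISI_DAMPED_HEPI_OFFLINE')

    # 3. (gain switching for the GS13s and BSC ST1 L4Cs would be analogous
    #     caputs to the relevant gain-switch channels, skipping BS & HAM6 GS13s)

    # 4. put HAM1 HEPI in READY
    caput('H1:GRD-HPI_HAM1_REQUEST', 'READY')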
Also, Jenne reported that the local SEISMON code gave a verbal alarm about 3 minutes before the S-waves arrived and ISIs started tripping. If the code is alive, I'm not surprised it reported the earthquake before USGS or the Terramon webpages.
I took a look at the BS HEPI trip. This trip was very clearly caused by saturations in the vertical actuators. More work is needed to figure out what to do, but I put together a set of plots which show why I think the actuator signals generated the trips.
I'll note that saturated hydraulic actuators should be treated seriously, and are a good reason to turn things off.
But, hopefully we can keep this from happening in a smarter way than just turning everything off.
[TVo, Niko, Jenne]
This morning, Hugh and Cheryl found that IOT2 was still sitting on its wheels - its feet hadn't been put down yet. So, Hugh helped TVo and Niko get the feet set down. TVo, Niko and I then tweaked up the alignment of the IMC Refl and Trans paths.
Later, Sheila pointed out that the lexan cover was likely still in place in the Refl path. So, TVo and Niko removed it (WP 7276) and put the dust cover on in its place. We decided that the lexan probably only needed to be removed for Refl, since that is used for feedback. The Trans path is just used for triggering and a camera, so it is less critical noise-wise. (Also, we only found one of the dust covers that goes in place of the lexan.) We then re-tweaked the Refl path alignment, although it needed very little. The spot on the IMC Refl camera looks much more normal now, which is also good.
We have tried a few times to close the WFS loops, and they keep diverging even though our hand alignment has brought the error signals close-ish to zero. So, we checked the phasing of the WFS by driving MC2 in length and maximizing the I phase signal in each quadrant of each WFS. This didn't change much though.
I have to go, and we just got a juicy earthquake (7.8 in Honduras; seismic systems are tripping, and we're just getting the S and P waves; Rayleigh waves should be here in ~10 min. Sheila is putting us in the LargeEQ state). So, we'll come back to this IMC work in the morning, but IMC MC2 Trans Sum is up to a max of 91 now, which is way better than the ~15 we had this afternoon. I think (but haven't trended to actually check yet) that we should be getting something like 150 counts on MC2 Trans Sum with 2 W PSL power.
On Monday Ken reconnected GV4's motor and encoder power cables after removing them to test-fit the shroud. MEDM status is back to RED (was yellow). Remains LOTO.
Sheila, Terry, Nutsinee, Daniel
We were unable to drive a voltage to the SHG TEC. There is an error in the SQZ chassis 2 wiring list (E1600384): it is missing the power cable for the TEC controllers. The TEC readbacks also suffer from some typos.
Some more info on the TEC work today:
23:08 Travis out of LVEA
23:11 Gerardo out to LVEA by CP1
23:13 Mark and Tyler are done at EY
23:15 Corey out of optics lab
23:18 Marc and Daniel into CER
23:25 Gerardo out
conlog-master.log:
2018-01-09T19:52:31.208080Z 4 Execute INSERT INTO events (pv_name, time_stamp, event_type, has_data, data) VALUES('H1:SQZ-SPARE_FLIPPER_1_NAME', '1515527550857386596', 'update', 1, '{"type":"DBR_STS_STRING","count":1,"value":["��'�"],"alarm_status":"NO_ALARM","alarm_severity":"NO_ALARM"}')
2018-01-09T19:52:31.208301Z 4 Query rollback

syslog:
Jan 9 11:52:31 conlog-master systemd[1]: Unit conlog.service entered failed state.

conlog.log:
Jan 9 11:52:31 conlog-master conlogd[10598]: terminate called after throwing an instance of 'sql::SQLException'
Jan 9 11:52:31 conlog-master conlogd[10598]: what(): Invalid JSON text: "Invalid escape character in string." at position 44 in value for column 'events.data'.
Suspect that it occurred with a Beckhoff restart.
Restarted and updated channel list. 59 channels added. 25 channels removed. List attached.
Found it crashed again, same issue, different channel:
2018-01-10T00:32:45.744823Z 5 Execute INSERT INTO events (pv_name, time_stamp, event_type, has_data, data) VALUES('H1:SQZ-LO_FLIPPER_NAME', '1515544365629095108', 'update', 1, '{"type":"DBR_STS_STRING","count":1,"value":["@@@e?@@@j�"],"alarm_status":"NO_ALARM","alarm_severity":"NO_ALARM"}')
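The failure mode is easy to reproduce in miniature: the flipper name PV is returning raw non-UTF-8 bytes, and once those are spliced into the JSON for events.data the escape sequence is invalid, so MySQL rejects the INSERT and conlogd dies on the unhandled sql::SQLException. A hedged sketch (Python for illustration, not the conlog C++ source) of a sanitize-before-insert approach:

    # illustration only: garbage bytes from a string PV break the JSON
    # unless they are sanitized before the INSERT statement is built
    import json

    raw = b"\xc3\x28'\xfd"  # stand-in for the garbage value logged above
    value = raw.decode('utf-8', errors='replace')  # invalid bytes -> U+FFFD
    doc = json.dumps({"type": "DBR_STS_STRING", "count": 1,
                      "value": [value],
                      "alarm_status": "NO_ALARM",
                      "alarm_severity": "NO_ALARM"})
    print(doc)  # valid JSON that the events.data column will accept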