TITLE: 11/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 3mph 3min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.21 μm/s
QUICK SUMMARY:
H1 has not made it back to NLN in over 24 hrs (with a low duty cycle overall this week). A quick glance shows ALSy is still not stable, as seen yesterday afternoon. Just chatted with Sheila; she suggested checking the stability of the PMC overnight after yesterday's PSL NPRO crystal temperature change.
Microseism did increase a bit in the last 24hrs, but has also come back down to where it was about 18hrs ago; very low winds.
The Ops Work Station (cdsws29) had a bad "left" monitor (started flickering yesterday)---Jonathan is replacing it now.
Attached is a roughly 19 hour trend, going back to relocking the PMC/ISS/FSS after yesterday's NPRO crystal temperature tuning. As can be seen, there were no excursions in PMC Refl above 18 W, and no sudden drops in the ISS diffracted power % or sudden ISS unlocks, a clear change from the last several days. We will continue to monitor this, but so far it looks like the new NPRO is much happier with this new crystal temperature.
Tony texted me at 10 PM and warned me of an unstable ALSY lock. After reading the alogs and enacting the changes that he and Keita made to manage to lock Y, I was still having issues.
For context, the ALSY laser had maintenance early today and since then has been having either multimode locking issues or more general laser performance issues, according to Daniel. To tackle this, Keita and Tony lowered the ALSY locking thresholds in order to make it past ALS; while this worked once, it was not repeatable. Their alogs: alog 81116, alog 81114. After some flailing, I decided to call for help.
Daniel came online and began to diagnose the ALSY issues. The first guess was that it was a multimode issue, so he began to change the ALS frequency to test it. The final conclusion is that it's some tuning error where ALSY doesn't operate stably at its current frequency since being adjusted to the PSL frequency. Daniel's alog: 81117. The final recommendation is that corrective maintenance needs to be done on-site at EY. Since this is not possible now, per Daniel's recommendation, the IFO will remain in LOCKING_ARMS_GREEN until tomorrow's DAY shift.
The laser in EY looks unstable. It may be multimode.
The first attached plot shows the ALSY locking attempts over the past day. The laser was adjusted to the new PSL frequency about 10 hours ago. Afterwards there is a distinct variation in the green output, accompanied by a smaller inverse change in the red power. In the past 6 hours good ALS locks (red trace near +1.0) are correlated with the lower green power states. At higher green powers we seem to be locking at intermediate transmitted powers, indicating the possibility of a multimode laser.
The second attached plot shows how changing the frequency of the laser changes the green and red powers. The green power varies considerably between -600 and +400 MHz and seems to be stable outside this region.
We need to try adjusting the pump diode current and crystal temperature and see if we can resolve it this way.
Went to EY and adjusted the laser diode current and temperature. Hopefully, we are now in a more stable region of operation.
Current as found: 1.600
Doubler as found: 33.64
New current: 1.511
New temp: 30.00
New doubler: 34.20
This resulted in a lower green power output. Correspondingly, the normalizations of H1:ALS-Y_LASER_GR_DC, H1:ALS-C_TRY_A_LF, and H1:ALS-C_TRY_A_DC_POWER were lowered by a factor of 1.4.
The nominal laser diode power was also updated for H1:ALS-Y_LASER_HEAD.
The EY controller and/or laser is still somewhat flaky when trying to adjust the temperature. If this doesn't work, we may have to consider swapping in a spare.
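As a minimal sketch of the rescaling described above (the channel names are from this log, but the starting gain values are placeholders, not the real settings; the live change would go through EPICS/ezca rather than plain dicts):

```python
# Hypothetical sketch of lowering the ALS-Y normalization gains after the
# green power drop. POWER_RATIO is the measured old/new green power ratio.
POWER_RATIO = 1.4

def rescale_normalizations(gains, ratio=POWER_RATIO):
    """Lower each normalization gain by the measured green power ratio."""
    return {channel: value / ratio for channel, value in gains.items()}

# Placeholder starting values, not the real settings:
old_gains = {
    "H1:ALS-Y_LASER_GR_DC": 1.0,
    "H1:ALS-C_TRY_A_LF": 1.0,
    "H1:ALS-C_TRY_A_DC_POWER": 1.0,
}
new_gains = rescale_normalizations(old_gains)
```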
TITLE: 11/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY:
ALS-Y is not working correctly and is very time-consuming to coax into locking.
Please see alog https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=81114
Once locking, I took ISC_LOCK to GREEN_ARMS_MANUAL and Keita lowered the threshold even further: H1:ALS-Y_REFL_LOCK_TRANSPDNORMTHRESH -> 0.40, which is even lower than during initial alignment. We did this because the DIFF beatnote we were seeing was -41 dBm.
We also increased the gain H1:ALS-Y_REFL_LOCK_LOCKEDGAIN from 6 to 12.
Line 921 in ALS_ARM.py was changed:
if ezca['ALS-C_TR%s_A_LF_OUT16'%self.arm] < 0.3:  # changed from 0.5 to 0.3
We then just waited and eventually it went to LOCKING_ALS!
I then reverted all my changes in ALS_ARM.py and saved and reloaded the ALS Y-arm Guardian as soon as we made it past LOCKING_ALS.
Got all the way up to LOW_NOISE_ETMX [558] before an oscillation seen on the ASC CHARD P input monitor unlocked it again.
The IFO is still very difficult to lock beyond ALS-Y. See the attachment for help finding these channels.
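The temporary Guardian edits above amount to relaxing a normalized-transmission check; a standalone sketch of that logic (threshold values from this log, function name hypothetical — the real ALS_ARM.py reads the channel via ezca):

```python
# Hypothetical sketch of the transmission check ALS_ARM.py performs; in
# Guardian the value comes from H1:ALS-C_TRY_A_LF_OUT16 via ezca.
def trans_ok(norm_trans, threshold=0.7):
    """True if normalized green arm transmission clears the lock threshold.

    The nominal threshold is 0.7; it was temporarily lowered to 0.45 and
    then 0.40 to cope with the reduced ALS-Y green power.
    """
    return norm_trans >= threshold

# The best alignment of the day (0.6) fails the nominal threshold,
# but passes once the threshold is lowered:
assert not trans_ok(0.6)
assert trans_ok(0.6, threshold=0.45)
```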
LOG:
Keita went out on the floor to work on the Lockloss Monitor.
Right after the ALSY laser was adjusted earlier today, IR power went down (right column, 2nd from the top) while the green power went up significantly (left column, 2nd from the top) and started drifting significantly. There are times when green Y transmission cannot go above 0.45 or so even when the arm is free swinging, despite the higher green power. The problem does not seem to be alignment.
Interestingly, the last two times when Y arm locking was good (i.e. green transmission divided by the green laser power was reasonable) were when the IR output of the laser peaked and the green output went down to the previous power level.
General Locking notes and other hacks:
ALS Y has been very difficult today. I spent over an hour trying to get a good alignment on the Y arm and was never able to achieve better than 0.6; letting INCREASE_FLASHES increase the flashes only gets us to 0.6 as well.
Reverted the Y arm to GPS time: 1414950519
I was then able to touch up the Y arm slightly, but ALS-C_TRY_A_LF_OUT was still below 0.7 (the threshold for locking the arm).
Keita then lowered the threshold to 0.45, which allowed us to lock the Y arm in GREEN_ARMS_MANUAL after touching up the alignment by hand:
H1:ALS-Y_REFL_LOCK_TRANSPDNORMTHRESH -> 0.450
This allowed us to get through initial alignment.
The ALS-Y green power was changed today, and the noise eater has stopped eating as much noise.
Once locking, I took ISC_LOCK to GREEN_ARMS_MANUAL and Keita lowered the threshold even further: H1:ALS-Y_REFL_LOCK_TRANSPDNORMTHRESH -> 0.40, which is even lower than during initial alignment. We did this because the DIFF beatnote we were seeing was -41 dBm.
We also increased the gain H1:ALS-Y_REFL_LOCK_LOCKEDGAIN from 6 to 12.
Line 921 in ALS_ARM.py was changed:
if ezca['ALS-C_TR%s_A_LF_OUT16'%self.arm] < 0.3:  # changed from 0.5 to 0.3
We then just waited and eventually it went to LOCKING_ALS!
I then reverted all my changes in ALS_ARM.py and saved and reloaded the ALS Y-arm Guardian.
We repeated these steps multiple times, and lost lock in FIND_IR due to an unstable DIFF IR signal.
But on the 3rd attempt it worked!
TITLE: 11/07 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Corrective Maintenance
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 5mph Gusts, 4mph 3min avg
Primary useism: 0.09 μm/s
Secondary useism: 0.23 μm/s
QUICK SUMMARY:
An earthquake may have interrupted the initial alignment that was done just before my shift.
Reverted the Y arm to the OSEM values we had before our lockloss.
Reverting all but SQZ to the last second of the last good initial alignment: 1414909471
SUS-IM1_M1_OPTICALIGN_P_OFFSET: 427.4 -> 427.4
SUS-IM1_M1_OPTICALIGN_Y_OFFSET: -384.8 -> -384.8
SUS-IM2_M1_OPTICALIGN_P_OFFSET: 706.7 -> 706.7
SUS-IM2_M1_OPTICALIGN_Y_OFFSET: -178.3 -> -178.3
SUS-IM3_M1_OPTICALIGN_P_OFFSET: -232.8 -> -232.8
SUS-IM3_M1_OPTICALIGN_Y_OFFSET: 333.5 -> 333.5
SUS-IM4_M1_OPTICALIGN_P_OFFSET: -85.2 -> -87.0
SUS-IM4_M1_OPTICALIGN_Y_OFFSET: 392.2 -> 391.5
SUS-RM1_M1_OPTICALIGN_P_OFFSET: -729.8 -> -723.6
SUS-RM1_M1_OPTICALIGN_Y_OFFSET: 180.9 -> 181.2
SUS-RM2_M1_OPTICALIGN_P_OFFSET: -48.0 -> -32.6
SUS-RM2_M1_OPTICALIGN_Y_OFFSET: 318.3 -> 303.7
SUS-MC1_M1_OPTICALIGN_P_OFFSET: 833.3 -> 833.3
SUS-MC1_M1_OPTICALIGN_Y_OFFSET: -2230.6 -> -2230.6
SUS-MC2_M1_OPTICALIGN_P_OFFSET: 591.5 -> 591.5
SUS-MC2_M1_OPTICALIGN_Y_OFFSET: -580.4 -> -580.4
SUS-MC3_M1_OPTICALIGN_P_OFFSET: -20.3 -> -20.3
SUS-MC3_M1_OPTICALIGN_Y_OFFSET: -2431.1 -> -2431.1
SUS-PRM_M1_OPTICALIGN_P_OFFSET: -1627.6 -> -1636.9
SUS-PRM_M1_OPTICALIGN_Y_OFFSET: 622.9 -> 616.7
SUS-PR2_M1_OPTICALIGN_P_OFFSET: 1553.5 -> 1549.5
SUS-PR2_M1_OPTICALIGN_Y_OFFSET: 2812.8 -> 2808.5
SUS-SRM_M1_OPTICALIGN_P_OFFSET: 2509.1 -> 2519.6
SUS-SRM_M1_OPTICALIGN_Y_OFFSET: -3801.7 -> -3810.4
SUS-OM1_M1_OPTICALIGN_P_OFFSET: -73.9 -> -74.7
SUS-OM1_M1_OPTICALIGN_Y_OFFSET: 686.7 -> 686.1
SUS-OM2_M1_OPTICALIGN_P_OFFSET: -1460.3 -> -1448.3
SUS-OM2_M1_OPTICALIGN_Y_OFFSET: -254.3 -> -307.4
SUS-OM3_M1_OPTICALIGN_P_OFFSET: -1095.0 -> -1095.0
SUS-OM3_M1_OPTICALIGN_Y_OFFSET: -212.0 -> -212.0
SUS-OMC_M1_OPTICALIGN_P_OFFSET: 36.9 -> 36.9
SUS-OMC_M1_OPTICALIGN_Y_OFFSET: 0.0 -> 0.0
SUS-ITMX_M0_OPTICALIGN_P_OFFSET: -104.4 -> -101.1
SUS-ITMX_M0_OPTICALIGN_Y_OFFSET: 109.2 -> 109.3
SUS-BS_M1_OPTICALIGN_P_OFFSET: 97.4 -> 97.6
SUS-BS_M1_OPTICALIGN_Y_OFFSET: -396.5 -> -396.3
SUS-ITMY_M0_OPTICALIGN_P_OFFSET: -1.5 -> -4.5
SUS-ITMY_M0_OPTICALIGN_Y_OFFSET: -20.1 -> -19.1
SUS-ETMX_M0_OPTICALIGN_P_OFFSET: -35.8 -> -37.1
SUS-ETMX_M0_OPTICALIGN_Y_OFFSET: -146.7 -> -146.6
SUS-ETMY_M0_OPTICALIGN_P_OFFSET: 159.2 -> 157.5
SUS-ETMY_M0_OPTICALIGN_Y_OFFSET: -165.7 -> -165.1
SUS-TMSX_M1_OPTICALIGN_P_OFFSET: -93.1 -> -96.7
SUS-TMSX_M1_OPTICALIGN_Y_OFFSET: -96.4 -> -98.7
SUS-TMSY_M1_OPTICALIGN_P_OFFSET: 75.5 -> 73.8
SUS-TMSY_M1_OPTICALIGN_Y_OFFSET: -265.9 -> -267.7
ALS X looks worse and ALS Y doesn't seem to change much at all.
Ground motion is falling.
Reverted back to a better time: 1414970755
TITLE: 11/06 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Tony
SHIFT SUMMARY:
Since there were issues with locking last night/this morning, the main activity of the day was tuning the PSL NPRO crystal temperature again (this was first done yesterday during maintenance).
Attempted locking toward the end of the shift, but had some issues with ALSy---its transmitted power has only been about 0.8 (earlier in the day we were at 0.9), and it goes through stretches of trouble during locking. Tony touched up the alignment and it was able to offload its alignment, but then there was a note about "waiting for PLL DIFF". Currently trying another alignment.
LOG:
V. Xu, J. Oberling
With the PSL NPRO still showing signs of mode hopping, after consulting with Sheila via TeamSpeak it was decided to stop locking attempts and go out to the PSL racks to tune the NPRO crystal temperature to see if we could find a mode hop free region. I wasn't too familiar with what we were looking for here but Vicky is, so she kindly agreed to lend a hand. We grabbed an oscilloscope from the EE shop, some Lemo cables and adapters, and out we went.
We wanted to scan the PMC through a complete FSR so we could watch the mode content while we changed the NPRO crystal temperature. To do this we unplugged the DC cables for PMC Trans and Refl from the PSL Monitoring Fieldbox and plugged them into the oscilloscope. We also plugged a cable into the 50:1 HV monitor so we could trigger off of the PMC PZT ramp. For the ramp we used the Alignment Ramp on the PMC MEDM screen, set to +/- 7V at 1 Hz. Once we got the signals on the scope and successfully triggered, we clearly had a full FSR visible so we kept these ramp settings. We watched things for a little bit before starting adjustments and noticed the laser frequency drift a little bit; Vicky estimated this at 1-3 GHz in 30 minutes.
We began at the crystal temperature where we left it yesterday (25.2 °C) and used the temperature control on the FSS MEDM to step the temperature down. At ~24.85 °C we started to see clear mode hopping behavior in the scan, which got worse at 24.75 °C; see the first attachment. We had originally set the crystal temperature to 24.7 °C, so clearly we started in a mode hopping region. We continued moving the temperature down to see where the mode hop region ended; the scan didn't start looking good again until ~24.5 °C. We continued moving the temperature down until we maxed the slider. This had the crystal temperature at 23.95 °C and we were still mode hop free. So we started moving the temperature up to map out the upper region (>25.2 °C). On the way up we started seeing the bad mode hopping region right above 24.5 °C, which matches where we saw it on the way down. However, it didn't clear up until the temperature was >25.1 °C, uncomfortably close to our starting value of 25.2 °C. It looks like the spot we locked the RefCav at yesterday was right on the edge of a mode hopping region, and we had been slowly drifting in and out of it. We continued to move the temperature up via the MEDM slider until it maxed at a crystal temperature of 25.49 °C and saw no mode hopping behavior. The second attachment shows some pictures from both of these good regions.
The slider was maxed but we wanted to continue mapping the upper region, especially as this area is closer to the operating temperature of the NPRO we just removed from the PSL (meaning the SQZ and ALS lasers had been happily locked in this area for all of O4 to date). To do this we set the slider back to 0 (a crystal temperature of ~24.7 °C) and used the knob on the front panel of the NPRO power supply to adjust the crystal temperature. We moved slowly while watching the power out of the amplifiers and NPRO, in case the temperature adjustment caused beam changes that had a negative effect on amp output. In this way we moved the crystal temperature up to 26.75 °C and saw no mode hopping behavior. We did, however, start to see the power out of the amplifiers drop a little. At a crystal temperature of 26.75 °C we lost roughly 3 mW from the NPRO, ~0.5 W from Amp1, and ~1 W from Amp2. 3 mW from the NPRO results in about 0.25 W lost at Amp2, so most of the loss is likely due to mode changes in the NPRO beam caused by the different operating temperature (if this was alignment we would have lost power a good bit faster, mode matching changes I've observed are generally a good deal slower than alignment changes). Because of this we stopped here, and set the crystal temperature to ~26.27 °C. Still had ~139.0 W output from Amp2, so all is well here.
We then plugged everything back in and relocked the PMC; it locked without issue. We then adjusted the crystal temperature down via the MEDM slider to find a RefCav resonance. We did have one time where the PMC unlocked when the PZT ran out of range (the PMC PZT moves to keep the PMC locked while the laser frequency changes). We couldn't see a 00 coming through on the PMC ramp, so we moved the crystal temperature back up until a 00 was clearly visible flashing through; at this point the PMC locked without issue. Continuing to move the temperature down we found a RefCav resonance flash through at a crystal temperature of 26.26 °C, so we locked the RefCav. It took a couple of passes to grab the lock (had to disable the FSS Guardian to keep it from yanking the gains around), but it did lock. The final crystal temperature was 26.2 °C. To finish we locked the ISS, grabbed our equipment, and left the LVEA.
We left things here for an hour and saw no signs of mode hopping again, so Daniel and Vicky started tuning the ALS and SQZ lasers to match the new NPRO frequency. Time will tell whether or not we are in a better place in regard to NPRO mode hopping, but results so far are promising.
Daniel, Vicky - We adjusted the SQZ + ALS Y/X laser crystal temps again to match the new PSL laser frequency:
----------------------------------------------------------------------------------------------------------------------------------
Record of aux laser crystal temp changes following PSL swap:
Vicky, Camilla. Repeated what Vicky did yesterday 81079 to get the OPO temperature back (these are instructions for when the temperature starts very far away):
We then took SQZ_MANAGER to DOWN and FDS_READY_IFO. Got there with no issues, we'll want to re-optimize the temperature right before we go to observing.
Starting OPO temp: 31.25. Ending temp: 31.42, this is closer to the temperature we had straight after the NPRO swap.
Closes FAMIS#28454, last checked 80607
CO2 trends looking good (ndscope1)
HWS trends looking good (ndscope2)
You can see in the trends when the ITMY laser was swapped about 15-16 days ago.
Trend shows that the ITMY HWS code stopped running. I restarted it.
Erik, Camilla. We've been seeing that the code running on h1hwsmsr1 (ITMY) kept stopping after ~1 hour with a "Fatal IO error 25" (Erik said it's related to a display); error attached.
We checked that the memory of h1hwsmsr1 is fine. Erik traced the problem back to matplotlib trying to make a plot and failing, as there was no display to make the plot on. State3.py calls get_wf_new_center() from hws_gradtools.py, which calls get_extrema_from_gradients(), which makes a contour plot; it's trying to make this plot and thinks there's a display, but then can't plot it. This error isn't happening on h1hwsmsr (ITMX). I had ssh'ed into h1hwsmsr1 using the -Y -C options (allowing the streamed image to show), but Erik found this was making the session think there was a display when there wasn't.
Fix: quit the tmux session, log in without those options (ssh controls@h1hwsmsr1), and start the code again. The code has now been running fine for the last 18 hours.
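A hedged illustration of the failure mode (not the fix actually used, which was logging in without X forwarding): forcing matplotlib's non-interactive Agg backend makes contour plotting safe on a headless machine, since no display is ever needed:

```python
# Select matplotlib's non-interactive Agg backend before pyplot is imported,
# so contour plots (like the one in get_extrema_from_gradients()) render
# without any display. The 2x2 array here is a minimal stand-in, not HWS data.
import matplotlib
matplotlib.use("Agg")  # must happen before "import matplotlib.pyplot"
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.contour([[0.0, 1.0], [1.0, 0.0]])  # minimal contour plot
fig.savefig("hws_contour_test.png")   # works with no DISPLAY set
```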
Closes FAMIS#26316, last checked 80936
Laser Status:
NPRO output power is 1.9W (nominal ~2W)
AMP1 output power is 68.62W (nominal ~70W)
AMP2 output power is 138.7W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 0 days, 0 hr 25 minutes
Reflected power = 17.47W
Transmitted power = 108.7W
PowerSum = 126.2W
FSS:
It has been locked for 0 days 0 hr and 25 min
TPD[V] = 0.8925V
ISS:
The diffracted power is around 3.6%
Last saturation event was 0 days 0 hours and 33 minutes ago
Possible Issues:
ISS diffracted power is high
Currently, Jason and Vicky are out on the floor for rack measurements and possible new laser frequency change (which will require changes for the ALS & SQZ lasers).
WP 12139
Entry for work done on 11/5/2024
Two 3T seismometers were installed in the LVEA Biergarten next to the PEM area. Signals are routed through the SUS-R3 PEM patch panel into the CER. Signals are connected to PEM AA chassis 4 and 5.
F. Clara, J. Warner
There are 2 of these plugged in; they are 3-axis seismometers, serial numbers T3611670 and T3611672. The first one is plugged into ports 4, 5 & 6 on the PEM patch panel, the second into ports 7, 8 & 9. In the CER, T3611670 is plugged into ports 21, 22 & 23 on PEM ADC5 and T3611672 into ports 27, 28 & 29 on PEM ADC4. In the DAQ, these channels are H1:PEM-CS_ADC_5_{20,21,22}_2K_OUT_DQ and H1:PEM-CS_ADC_4_{26,27,28}_2K_OUT_DQ. So far the seismometers look like they are giving pretty good data, similar to the STS and the old PEM Guralp in the biergarten. The seismometers are oriented so that the "north" marking on their carry handles points down the X arm, as best I could manage by eye.
I need to figure out the calibrations, but it looks like there is almost exactly a -15 dB difference between these new sensors and the old PEM Guralp, though maybe the signal chain isn't exactly the same.
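For reference, and assuming the -15 dB figure is an amplitude (not power) ratio, the corresponding linear factor can be checked directly:

```python
# -15 dB, read as an amplitude ratio, is a factor of ~5.6 -- consistent with
# the 3Ts reading about 5x lower than the PEM Guralp in the ASD comparison.
db_difference = 15.0
amplitude_ratio = 10 ** (db_difference / 20)   # ~5.62
power_ratio = 10 ** (db_difference / 10)       # ~31.6, if it were a power ratio
```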
Attached images compare the 3Ts to the ITMY STS and the existing PEM Guralp in the biergarten. The first image compares ASDs for each seismometer. The shapes are pretty similar below 40 Hz, but above that they all have very different responses. I don't know what the PEM Guralp is calibrated to, if anything; it looks ~10x lower than the STS (which is calibrated to nm/s). The 3Ts are about 5x lower than the PEM sensor, so ~50x lower than the STS.
The second image shows TFs for the X, Y & Z DOFs between the 3Ts and the STS. These are just passive TFs between the STS and 3Ts to see if they have a similar response to ground motion. They are generally pretty flat between 0.1 and 10 Hz. The X & Y DOFs seem pretty consistent; the Z TFs differ starting around 10 Hz. I should go check that the feet are locked and have similar extension.
The third image shows TFs between the 3Ts and the existing PEM Guralp. These are pretty similar to the TFs with the STS: the horizontal DOFs all look very similar, flat between 0.1 and 10 Hz, but the ADC4 sensor has a different vertical response.
I'll look at noise floors next.
The noise for these seems almost comparable to T240s above 100 mHz; I'm less certain about the noise below 100 mHz, as these don't have thermal enclosures like the other ground sensors. Using mccs2 in MATLAB to remove all the coherent noise with the STS and PEM Guralp, the residual noise is pretty close to the T240 spec noise in SEI_sensor_noise. The attached plots show the ASDs and residuals after finding a scale factor that matches the 3T ASDs to the calibrated ITMY STS ASDs. Solid lines are the 3T ASDs; dashed lines are the residuals after coherent subtraction.
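The mccs2 subtraction is MATLAB; as a rough single-witness Python analogue on synthetic data (using scipy, an assumption of this sketch rather than the actual analysis), the coherent part of the target spectrum can be removed using the measured coherence:

```python
import numpy as np
from scipy import signal

# Synthetic stand-in: both sensors see common "ground motion" plus independent
# self-noise, loosely like the 3T (target) and the STS (witness).
rng = np.random.default_rng(0)
fs = 256.0
ground = rng.standard_normal(2 ** 16)
witness = ground + 0.1 * rng.standard_normal(ground.size)       # STS-like
target = 2.0 * ground + 0.5 * rng.standard_normal(ground.size)  # 3T-like

# Ideal coherent subtraction: residual PSD = target PSD * (1 - coherence),
# where signal.coherence returns magnitude-squared coherence.
f, coh2 = signal.coherence(target, witness, fs=fs, nperseg=4096)
f, psd_target = signal.welch(target, fs=fs, nperseg=4096)
psd_residual = psd_target * (1.0 - coh2)
```

The residual approximates the target sensor's incoherent self-noise floor, which is what the mccs2 result is being compared against T240 spec noise.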
For convenience I've attached the responses of the T240 and the STS-2 from the manuals.
These instruments both have a steep fall-off above 50-60 Hz.
This is not compensated in the realtime filters, as doing so would just add lots of noise at high frequency that we'd then have to roll off again.
T240 user guide - pg 45
https://dcc.ligo.org/LIGO-E1500379
The T240 response is pretty flat up to 10 Hz, has a peak at ~ 50 Hz, then falls off rapidly.
STS-2 manual - pg 7
https://dcc.ligo.org/LIGO-E2300142
Likewise the STS-2 response is pretty flat up to 10 Hz, then there is ripple, and a steep falloff above 60 Hz
I've roughly copied the LLO configuration for the AS power monitor (that won't saturate after lock losses) and installed an additional photodiode in the AS_AIR camera enclosure. PD output goes to H1:PEM-CS_ADC_5_19_2K_OUT_DQ for now.
The GigE camera used to receive ~40 ppm of the power coming into HAM6. I replaced the steering mirror in front of the GigE with a 90:10 splitter; the camera now receives ~36 ppm and the beam going to the photodiode is ~4 ppm. But I installed an ND1.0 filter in front of the PD, so the actual power on the PD is ~0.4 ppm.
See the attached cartoon (1st attachment) and the picture (2nd attachment).
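The pickoff power budget above is simple bookkeeping; a sketch with the numbers from this entry (ND transmission taken as the nominal 10^-OD, which per the later comment is roughly right for these filters at 1 µm):

```python
# AS_AIR pickoff power budget, in ppm of the power entering HAM6.
into_gige_path_ppm = 40.0
camera_ppm = into_gige_path_ppm * 0.90      # 90:10 splitter -> ~36 ppm to camera
pd_ppm = into_gige_path_ppm * 0.10          # ~4 ppm toward the photodiode
pd_after_nd_ppm = pd_ppm * 10 ** (-1.0)     # ND1.0 at nominal OD -> ~0.4 ppm on PD
```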
Details:
This is the first look at a lock loss from 60 W. At least the channel didn't saturate, but we might need more headroom (it should rail at 32k counts).
The peak power in this example is something like 670 W. (I cannot figure out for now which AA filter and maybe decimation filter are in place for this channel; these might be impacting the shape of the peak.)
Operators, please check if this channel rails after locklosses. If it does I have to change the ND filter.
Also, it would be nice if the lock loss tool automatically triggers a script to integrate the lock loss peak (which is yet to be written).
Tagging Opsinfo
Also checked out the channel during the last 3 high power locklosses this morning (NLN, OMC_WHITENING, and MOVE_SPOTS). For the NLN lockloss, it peaked at ~16.3k cts 80 ms after the IMC lost lock. Dropping from OMC_WHITENING only saw ~11.5k cts 100 ms after ASC lost it. Dropping from MOVE_SPOTS saw a much higher reading (at the railing value?) of ~33k cts, also ~100 ms after ASC and IMC lost lock.
Camilla taking us down at 10 W earlier this morning did not rail the new channel; it saw about ~21k cts.
As for the filtering associated with the 2k DAQ, PEM seems to have a standard ISC AA filter, but the most impactful filter is an 8x decimation filter (16k -> 2k). Erik told me that the same 8x filter is implemented in src/include/drv/daqLib.c: line 187 (bi-quad form) and line 280 (not bi-quad), and one is mathematically transformed into the other and vice versa.
In the attached, it takes ~1.3 ms for the step response of the decimation filter to reach its first unity point, which is not great but OK for what we're observing, as the lock loss peaks seem to be ~10 ms FWHM. For now I'd say it's not unreasonable to use this channel as is.
I added ND0.6, which will buy us about a factor of 4.
(I'd have used ND2.0 instead of ND1.0 plus ND0.6, but it turns out that Thorlabs ND2.0 is more transparent at 1um relative to 532nm than ND1.0 and ND0.6 are. Looking at their data, ND2.0 seems to transmit ~4 or 5%. ND 1.0 and ND 0.6 are closer to nominal optical density at 1um.)
New calibration for H1:PEM-CS_ADC_5_19_2K_OUT_DQ using ASC-AS_C_NSUM_OUT (after the ND was increased to 1.0+0.6) is ~0.708/4.00 ≈ 0.177 W/count.
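Putting the numbers from this thread together as a sanity-check sketch (the ~32k-count rail figure is from the earlier comment; treating it as a full 16-bit rail of 32768 counts is an assumption):

```python
# Sanity check of the updated AS-port monitor calibration.
extra_attenuation = 10 ** 0.6      # ND0.6 at nominal OD -> ~3.98, the "factor of 4"
cal_w_per_count = 0.708 / 4.00     # ~0.177 W/count after adding ND0.6

def counts_to_watts(counts):
    """Convert H1:PEM-CS_ADC_5_19_2K_OUT_DQ counts to watts at the AS port."""
    return counts * cal_w_per_count

rail_w = counts_to_watts(32768)    # rail (~32k counts) -> ~5.8 kW of headroom
```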
After Daniel got back, Corey locked the green arms fine and started locking. We couldn't catch PRMI even after CHECK_MICH_FRINGES, so I started an initial alignment at 17:13 UTC. I had to go through DOWN and INIT twice, as the first time the PMC input power wasn't reduced to 2 W (it stayed at 10 W, maybe because we went through CHECK_SDF).
After INITIAL_ALIGNMENT, locking has been fully automated. We lost lock at ENGAGE_ASC_FOR_FULL_IFO once; the second time we got past this step and are currently at POWER_10W.
18:35UTC Back to NLN. Accepted some ALSY sdf diffs in OBSERVE.snap.