After failing to find DRMI and a lockloss at FIND_IR early on, I ran an initial alignment and was then able to reacquire NLN without any interventions at 08:44 UTC.
In Observing at 09:02 UTC once the ADS signals had converged for the camera servo to turn on. I let everything thermalize for about 2 hours, until we reached >430 kW in the arms. I'm going to take us out of observing shortly to test and take a measurement of a new FF filter for MICH and SRCL; after that I'll run a calibration suite and then hopefully go back into observing for the rest of the morning.
Verbal reported an EQ from NZ (5.5) whose R-waves will hit us in half an hour.
TITLE: 05/24 Owl Shift: 07:00-15:00 UTC (00:00-08:00 PST), all times posted in UTC
STATE of H1: Lock Acquisition
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 15mph Gusts, 12mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.12 μm/s
QUICK SUMMARY:
TITLE: 05/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
SHIFT SUMMARY:
Frustrating shift sandwiched with tough locking at the early states of ISC_LOCK, mainly LOCKING_ALS. It seemed to get better, but only after MANY attempts and at least an hour!
Had one lock with some observing time, but it was another short-lived lock (~2 hrs).
LOG:
After the earlier lockloss, H1 is OBSERVING (first time since dolphin crash).
Sadly, locking was not easy: there were issues around the LOCKING ALS/FIND IR steps---not clear why this was the case, but we eventually made it past them. DRMI did not look good, so I opted for PRMI, which also did not look good, and ultimately selected CHECK MICH FRINGES; we slowly got through PRMI and then had mostly no issues getting to NLN (ENGAGE ASC FOR FULL IFO did take a bit of time waiting for the ADS YAW3 signal to converge). After all of this, I wonder if running an initial alignment would have been better.
While in NLN, Vicky took a handful of minutes to look at squeezer while I was going through SDF.
ACCEPTED all the diffs for the attached nodes (tagging ISC & SUS).
NOTE: After thermalization, can (1) enable mich/srcl feedforward & (2) run calibration suite.
ENDY Station Measurement
During Tuesday maintenance, the PCAL team (Tony S. & Dripta B.) went to EndY with WSH (PS4) and took an End station measurement.
The ENDY Station Measurement was carried out according to the procedure outlined in Document LIGO-T1500062-v15, Pcal End Station Power Sensor Responsivity Ratio Measurements: Procedures and Log, and was completed without any unexpected hiccups by 11:30am. LIGO-T1500062-v15 is attached.
First thing I do is take a picture of the beam spot before anything is touched!
Martel:
We started by setting up a Martel voltage source to apply voltage into the PCAL chassis's Input 1 channel, and we recorded the times that -4.000 V, -2.000 V, and 0.000 V signals were sent to the chassis. The analysis code that we run after we return uses the GPS times, grabs the data, and creates the Martel_Voltage_Test.png graph. We also measured the Martel's voltages in the PCAL lab to calculate the ADC conversion factor, which is included in the document.
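The conversion-factor calculation boils down to a straight-line fit of mean ADC readback against the known applied Martel voltages; a minimal sketch (the numbers below are made up for illustration, not from the real measurement):

```python
import numpy as np

def adc_counts_per_volt(applied_volts, mean_adc_counts):
    """Least-squares line through (applied voltage, mean ADC readback)
    pairs; the slope is the ADC conversion factor in counts/volt."""
    slope, offset = np.polyfit(np.asarray(applied_volts, dtype=float),
                               np.asarray(mean_adc_counts, dtype=float), 1)
    return slope, offset

# Illustrative numbers only: three steps like the -4 V / -2 V / 0 V sequence
slope, offset = adc_counts_per_volt([-4.0, -2.0, 0.0],
                                    [-26204.4, -13097.2, 10.0])
```

The offset term also gives a quick sanity check on the ADC background at 0 V.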
After the Martel measurement, the procedure walks us through the steps required to make a series of plots while the Working Standard is in the Transmitter Module. These plots are shown in WS_at_TX.png.
Next comes the WS in the Receiver Module; these plots are shown in WS_at_RX.png.
This is followed by TX_RX.png, which plots the Transmitter Module and the Receiver Module operating without the WS in the beam path at all.
The last picture is of the Beam spot after we had finished the measurement.
All of this data is then used to generate LHO_ENDY_PD_ReportV2.pdf, which is attached and is a work in progress in the form of a living document.
All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LHO_ENDY/
PCAL Lab Responsivity Ratio Measurement:
A WSH/GSHL (PS4/PS5) Responsivity Ratio measurement was run, analyzed, and pushed to the SVN.
The analysis of this measurement produces 4 PDF files which we use to vet the data for problems:
raw_voltages.pdf
avg_voltages.pdf
raw_ratios.pdf
avg_ratios.pdf
All data and analysis have been committed to the SVN.
https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Projects/PhotonCalibrator/measurements/LabData/PS4_PS5/
This adventure has been brought to you by Dr. Dripta B. & Tony S.
More analysis to come.
Jennie W, Sheila D, Jeff K
In order to understand output losses, and thus why we are not getting the level of squeezing we expect, I ran Craig's scripts to step the DARM offset. Instead of measuring PCAL line heights on DCPD_SUM at each step, we will look at these line heights on OMC REFL.
As this PD is much noisier than the DCPDs, I had to do some tuning of the PCAL lines we use. Since we have limited measurement time and the IFO was still at MAX POWER, Sheila suggested we spend some time tuning the line heights to use in the measurement.
By this point we were waiting for violin modes to damp while in the OMC_WHITENING STATE.
The original PCAL EY and EX values are in the first image.
I ran set_up_pcal_for_darm_offset_test.py. Originally this uses:
PCALX 255Hz 40000 counts
PCALY 410.3Hz 40000 counts
but this did not give us the correct resolution on OMC REFL for the higher frequency of the two. Neither OFS PD was saturated.
See Ref0 (4th image attached) for the OMC REFL spectrum with these settings.
I manually stepped up the 410.3 Hz line in amplitude (on the PCALY PCAL_END screen) to 48000 counts, at which point the OFS PD saturated. Following Jeff's instructions, I turned off the loop_enable switch on the PCAL_END screen for PCALY, changed the amplitude down to 46000, and switched the loop back on. The OFS no longer saturated.
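That loop-off / change / loop-on sequence can be scripted against a generic EPICS write function; a minimal sketch (the channel suffixes are illustrative guesses, not the real PCAL_END channel names):

```python
def set_line_amplitude(caput, prefix, amplitude):
    """Change a Pcal oscillator amplitude with the OFS loop held open,
    mirroring the manual steps above. `caput` is an EPICS write function
    (e.g. epics.caput); the suffixes below are illustrative only."""
    caput(prefix + "_LOOP_ENABLE", 0)           # open the loop first
    caput(prefix + "_OSC_AMPLITUDE", amplitude) # step the amplitude
    caput(prefix + "_LOOP_ENABLE", 1)           # then re-close the loop

# Dry run with a fake caput that just records the writes:
writes = []
set_line_amplitude(lambda ch, val: writes.append((ch, val)),
                   "H1:CAL-PCALY", 46000)
```

Ordering is the whole point: the amplitude write lands strictly between the two loop-enable writes, so the step can never hit a closed loop.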
See Ref1 (5th image attached) for the new OMC REFL spectrum. This still does not give a good SNR.
I then switched off the OSC_SUM_MATRIX (i.e. the output) and changed this line to 11.5 Hz at an amplitude of 40000 counts. This does not saturate the OFS PD.
See Ref 2 (6th image attached) for the OMC REFL spectrum.
The IFO moved to NLN and this did not change the PCAL settings I had on.
The third image attached shows the PCALEX and EY settings we used for the DARM offset measurement.
I then ran DARMOffsetStep.py starting at GPS 1368922456 to step the DARM offset.
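The core of a script like DARMOffsetStep.py is a step-and-dwell loop that records the GPS time of each step so the analysis can slice out data afterwards. A sketch with injected caput/clock functions (the channel name and dwell are assumptions, not the script's actual settings):

```python
import time

def step_offset(caput, gps_now, offsets,
                channel="H1:LSC-DARM1_OFFSET", dwell=0.0):
    """Apply each offset in turn, logging (gps_time, offset) pairs.
    `caput` writes an EPICS channel; `gps_now` returns current GPS time.
    The channel name here is an illustrative guess."""
    log = []
    for off in offsets:
        caput(channel, off)          # apply the new DARM offset
        log.append((gps_now(), off)) # record when it was applied
        time.sleep(dwell)            # dwell so lines can be measured
    return log

# Dry run with fake caput and clock:
tick = iter(range(1368922456, 1368922460))
log = step_offset(lambda ch, v: None, lambda: next(tick), [1e-5, 2e-5])
```

The returned log is what turns into the .txt data file referenced below.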
Jeff reset the PCALEY and EX values using sdf.
xml from tuning measurements is in /ligo/home/jennifer.wright/git/OMC_mode_matching/2023-05-23_DARM_offset_meas.xml
Code is in /ligo/gitcommon/labutils/darm_offset_step
DARM offset results are in data/darm_offset_steps_2023_May_24_00_13_58_UTC.txt
Stay tuned for processed measurements.
Jennie W., Sheila D.
After using an adapted version of Craig and Dan's code for the analysis, I have plotted the optical gain (as measured at the OMC REFL PD) while the DARM offset is stepped vs. power on the OMC DCPDs, shown in the first plot.
The optical gain is determined from the heights of two PCAL lines as measured on the OMC REFL PD, at 11.5 Hz and 255 Hz; unfortunately, we were trying to get the measurement done quickly and so did not notch the lower-frequency one out of the DARM feedback loop.
This plot can be thought of as light rejected by the OMC vs. light passing through the OMC.
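Extracting a line height from the PD time series at each step amounts to single-bin demodulation at the Pcal line frequency; a minimal sketch on synthetic data (not the real OMC REFL channel):

```python
import numpy as np

def line_height(x, fs, f_line):
    """Amplitude of the sinusoid at f_line in x, by demodulating and
    averaging (assumes an integer number of line cycles in the record,
    so the off-frequency terms average away)."""
    t = np.arange(len(x)) / fs
    demod = np.asarray(x, dtype=float) * np.exp(-2j * np.pi * f_line * t)
    return 2.0 * abs(demod.mean())

# Synthetic check: a 3-count line at 255 Hz in 4 s of 16 kHz-ish data
fs = 16384.0
t = np.arange(int(4 * fs)) / fs
amp = line_height(3.0 * np.sin(2 * np.pi * 255.0 * t), fs, 255.0)
```

On a noisy PD the SNR of this estimate grows with the averaging time, which is why the line amplitudes had to be tuned up as described above.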
We did not get the parabola we expected from this (ignore the fit line).
The second plot is the same optical gain against DARM offset in counts.
Looking at the ndscope of the time series during this measurement (first screenshot), there is a marked upward trend on the PD, presumably because we started this measurement during OMC whitening, so we had not been locked for long, and the light on the PD increases as the arms thermalize.
The second screenshot shows the noise spectra after the measurement.
Plots are in /ligo/gitcommon/labutils/darm_offset_step/figures/plot_OMC_REFL_vs_dcpd_sum/
data is in /ligo/gitcommon/labutils/darm_offset_step/data/darm_offset_steps_2023_May_24_00_13_58_UTC.txt & darm_offset_steps_2023_May_24_00_13_58_UTC.pkl,
code is in /ligo/gitcommon/labutils/darm_offset_step/plot_OMC_REFL_vs_dcpd_sum.py,
Screenshots are in jennie.wright/git/OMC_mode_matching.
Summary: We can't get anything useful out of this measurement without a quieter PD. Maybe we can get the same information by a different method. Also, why is the OMC REFL PD so noisy?
J. Kissel, C. Gray
That lock loss from NOMINAL_LOW_NOISE at 2023-05-24 00:33 UTC was me. I was running through the SDF system looking to accept or revert changes to get ready for observing, saw that there were filter differences and tramp differences in the in-loop MICH and SRCL FF in the LSC model, and we thought Corey was starting the FF test discussed in LHO:69862 a little too early. As soon as I hit "revert," we lost lock. I don't have a screenshot of the DIFFs, but Corey thinks that these filter DIFFs were a result of him accepting the "newer" "5-20-23" filters provided last night just to get us to observe (see brief mention of the acceptance in LHO:69801, and some discussion of the filters in LHO:69785). We think these showed up as a DIFF tonight because the ISC_LOCK guardian had reset them to the "normal" "4-15-23" filters. Or something. But clearly, switching these filters at the speed of a button click does not make the IFO happy.
Silver lining -- we've recovered NOMINAL_LOW_NOISE at 2023-05-24 00:18 UTC after a long 24 hours of dolphin crashes...
TITLE: 05/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 11mph Gusts, 6mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.14 μm/s
QUICK SUMMARY:
H1 is on its way up to NLN for first time post-Maintenance (and dolphin crash last night)---currently going to MAX power.
Have a few items on the docket once we are at NLN & later thermalized:
In other news, nuc30 has been doing better post-nuc26 swap.
[Dave, Patrick, Erik]
The unifi wireless access point (WAP) medm is updated to include FCES, MX and MY access points for the CDS network. These can now be disabled and enabled from the MEDM.
Each WAP status has been added to the SDF, so an SDF difference will be indicated when a WAP is turned on.
Sheila, TJ, Betsy, Keita, Jeff
Over the last 9 days we have moved the PR3 yaw slider by 2.3 urad (from 151.6 to 149.3) to keep the ALS beatnotes on ISCT1 and allow locking. Today we had planned to move PR3 back and realign on the table, but when I set PR3 back, TJ saw that the beams arriving on the table were clipped and not reaching the trans PDs well enough to trigger and lock the arms in green. The beam was clipping on the prism, so fixing this on the table would have meant moving the periscope mirrors. We decided to first double-check that the green camera for the X arm had not drifted.
Instead of realigning on the table, we decided to check the camera reference. TJ skipped shuttering ALS as we relocked, let ADS converge at 2W, and Keita and I ran the green offset servo (reminder, green reference instructions are here). The attached screenshot shows the QPD offsets and camera offsets that we would have gotten. These are not large changes to the references, so it doesn't seem that the problem is something like a drift of the camera. We reverted these changes to the green references.
A screenshot of the camera is attached, which does show the beam is not well centered on this camera.
We are returning to locking now, but since this has been causing locking troubles and seems to be getting worse over time, we plan to re-align the table the next time we are relocking during the day.
In the end, the PR3 alignment was a temporary fix for the true problem: the HAM-ISIs had all been drifting off in yaw (RZ), slowly but surely, for weeks -- see LHO:69934.
TITLE: 05/23 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Preventive Maintenance
SHIFT SUMMARY: We still have not fully recovered since the dolphin crash yesterday. After maintenance we took some time to investigate the PR3 movement that has been a mystery over the last week or two. Initial alignment and locking were relatively straightforward. We didn't run into the OMC locking problems that Ryan C saw this morning during our one time locking the OMC today. We recently made it to 25W and lost lock at MOVE_SPOTS, maybe because ALS was still unshuttered. Relocking again now.
LOG:
| Start Time | System | Name | Location | Laser_Haz | Task | Time End |
|---|---|---|---|---|---|---|
| 14:53 | FAC | Contractors | LVEA | n | Vinyl Work | 18:50 |
| 15:02 | FAC | Randy & Mitch | EndY | n | Wind fence work | 17:21 |
| 15:08 | VAC | Gerardo & Jordan | CS | n | Purge Air work | 16:41 |
| 15:11 | FAC | Kim | EY | n | Tech clean | 17:04 |
| 15:15 | PCAL | Tony, Dripta | EY | YES | PCAL meas. | 18:05 |
| 15:22 | CDS/SEI | Fil, Dave, Jim | CER | n | IOPSEIH16 ADC swap | 16:47 |
| 15:24 | FAC | Tyler | CS | n | Moving road barricades | 16:24 |
| 15:25 | FAC | Bubba, contractor | EY | n | Pick up lift with truck | 17:21 |
| 15:33 | FAC | Christina | CS | n | Moving items to recycling and to receiving | 17:23 |
| 15:42 | FAC | Chris | Outbuildings | n | Replace HVAC filters | 18:05 |
| 15:45 | FAC | APS | High bay | n | High bay access system | 19:23 |
| 15:52 | SEI/CDS | Fil | EY | n | Cable pulls for HEPI upgrade | 18:51 |
| 15:56 | FAC | Kim, Cindi | FCES | n | Tech clean | 17:06 |
| 15:56 | FAC | Kim | EX | n | Tech clean | 16:26 |
| 16:06 | FAC | Richard | LVEA, FCES | n | Check on APS | 16:06 |
| 16:11 | CDS | Marc, Dave, Fernando | CER | n | h1omc0 chassis power supply work | 17:37 |
| 16:15 | PSL | Jason | LVEA | n | Check PSL racks OK for run | 16:46 |
| 16:34 | - | Betsy | LVEA | n | Pre sweep | 17:40 |
| 16:37 | VAC | Travis, Janos | EX, EY, mids | n | HEPTA surveys | 18:25 |
| 16:42 | VAC | Gerardo, Jordan | EY - Mech room | n | Purge air | 17:38 |
| 16:45 | FAC | Bubba | LVEA | n | Looking for regulators | 17:45 |
| 16:47 | CDS | Dave | remote | n | Reboot CW machine | 17:23 |
| 17:05 | SEI | Jim | EY,EX | n | Lube HEPI fans | 17:05 |
| 17:06 | FAC | Kim, Cindi | LVEA | n | Tech clean | 18:51 |
| 17:25 | CDS | Dave | remote | n | Restart cameras | 17:40 |
| 17:38 | CDS | Marc, Fernando | CER | n | ioplsc0, iopasc0, power supply work ISC-C1/C2 racks | 18:10 |
| 17:41 | CDS | Richard | LVEA | n | Verify circuit | 17:54 |
| 17:46 | SEI | Jim | CR | n | Testing HAM3 FF filter | 18:45 |
| 17:54 | VAC | Gerardo, Jordan | CS | n | Purge air work | 19:22 |
| 18:04 | CDS | Erik | CR | n | cdsrfm dolphin work | 18:34 |
| 18:25 | - | Betsy, Travis | EX | n | Sweep | 18:54 |
| 18:55 | CDS | Fil, Marc | EX | n | Cleanup | 19:25 |
| 19:05 | - | Betsy, Travis | EY | n | Sweep | 19:31 |
| 19:09 | PEM | Adrian | EY | n | Cleanup | 19:35 |
| 19:36 | PEM | Adrian | EX | n | Cleanup | 19:51 |
| 20:38 | PCAL | Tony, Dripta | PCAL lab | local | Pcal meas | ongoing |
| 21:21 | ISC/ALS | Sheila, TJ, Betsy | LVEA - ISCT1 | local | Realign beatnotes | 21:24 |
| 21:22 | VAC | Gerardo, Jordan | LVEA | n | FTIR sample | 22:06 |
| 21:27 | VAC | Janos | MY | n | HETPA checks | 22:06 |
As noted earlier (see this aLog entry) the Pcal X/Y comparison was giving nonsensical values.
I found that the demodulation oscillator frequency for the Pcal Xarm (H1:CAL-CS_TDEP_PCAL_X_COMPARE_COMPARISON_OSC_FREQ) had not been updated on May 5 when we changed line frequencies.
It has now been updated to 283.91 Hz.
The change has been accepted and the value is now monitored in SDF.
The Yarm frequency is now monitored as well.
I've accepted what I believe are these changes in the PCALY OBSERVE.snap (we confirmed that there were no DIFFs in these channels in the safe.snap).
Jeff B, DriptaB, RickS
After being alerted to this issue by JeffK, we restored the Pcal Yend EPICS variable values using the CAPUT commands from this file in the SVN:
/ligo/svncommon/CalSVN/aligocalibration/trunk/Projects/PhotonCalibrator/O4_EPICS_calibration/Pcal_LHO_CAPUTfile_O4run_2023-05-11.txt
caput H1:CAL-PCALY_FORCE_COEFF_RHO_T 7157.80
caput H1:CAL-PCALY_FORCE_COEFF_RHO_R 10652.5
caput H1:CAL-PCALY_FORCE_COEFF_TX_PD_ADC_BG 17.5161
caput H1:CAL-PCALY_FORCE_COEFF_RX_PD_ADC_BG -0.8150
caput H1:CAL-PCALY_FORCE_COEFF_TX_OPT_EFF_CORR 0.9920
caput H1:CAL-PCALY_FORCE_COEFF_RX_OPT_EFF_CORR 0.9931
caput H1:CAL-PCALY_XY_COMPARE_CORR_FACT 1.0015
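Restores like this can be scripted by parsing the caput lines out of the restore file and replaying them; a sketch of the parsing step (the replay via epics.caput is left out):

```python
def parse_caput_file(text):
    """Turn 'caput CHANNEL VALUE' lines into (channel, value) pairs,
    skipping anything that doesn't match that three-field shape."""
    settings = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "caput":
            settings.append((parts[1], float(parts[2])))
    return settings

# Example on two lines from the list above:
pairs = parse_caput_file(
    "caput H1:CAL-PCALY_FORCE_COEFF_RHO_T 7157.80\n"
    "caput H1:CAL-PCALY_FORCE_COEFF_RHO_R 10652.5\n")
```

Keeping the restore file in the SVN as plain caput commands means it works both pasted into a shell and replayed programmatically like this.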
The values were accepted in the SDF, for both the observe.snap and safe.snap files.
Looks like the 3IFO large Container #3 needs to be checked for any purge line issues. Although I'm not sure of the unit, this readout does not look like the others, at "160 H2O ppm" while the others are ~10-50 ish. A trend shows this value on a slight rise.
While inspecting, Randy and I found that the gauge ball on container unit #2 looked stuck (it was also reading very low on the screen), so he dialed down the pressure and tapped it to unstick it before resetting the pressure. The trend of this container shows a pretty flat, hashy line for the last 100 days, and now it is back to trending something. Hopefully it is showing flow now.
The #3 unit is still "higher" than the others when glancing at the MEDM values, but a longer year-long trend shows it goes up and down. Likely all of these channels need some recalibration, since the units and signs are not obvious.
Betsy, Jason, Sheila, Fil, Marc, Travis, Adrian
This morning, a few of us walked through the Corner and End station VEAs to check and turn off items utilized during non-Observing run times, per T1500386 (bold items are those which needed attention this time around). All of the following were checked off.
• Make sure no one is in the LVEA
• Cranes in their "parking spots" & their lights are OFF
• Monitors/work stations are turned OFF (except VAC computers) - Powered down ITMX camera setup computer
• Phones unplugged (wall-wars & RJ11 plugs) & batteries pulled from handsets (Phone locations here)
• Confirm no mechanical shorts onto HEPI.
• Cleanrooms OFF
• PSL in Science Mode - bit of an audible hum in the LVEA after everything was turned off; it kind of seemed like it was more in the South bay, maybe fans or HVAC still need to be checked in this area
• ISC Table fans OFF
• Confirm wifi access points are unplugged (instructions)
• Electronics racks (i.e. make sure no test equipment is connected to a rack unless there's a work permit for it.) - A few unconnected cables hanging in the PSL, ISC, SQZ racks, but all determined to be not an issue (some used for temp needs). Added termination plugs to unused RF plugs in the SQZ racks, and 1 in the PSL rack. Lots of PEM BNC cables are still run to various areas from the PEM racks. An o-scope is connected and powered on near the West bay corner for PEM coil use. Adrian/Robert confirm all PEM is in a nominal run configuration. Will spend another Tuesday with folks to finish cleaning up and stowing cables.
Temp dust mon at HAM2 unplugged.
End stations - HWS camera power supplies are under the IIET upgrade WIP, so temporarily plugged into wall power.
EX weather station equipment (and PS) in rack on VEA floor removed by Fil.
HAM6 RGA ion pump controller sitting on a cart at the end of HAM6 chamber will be left on, but RGA/fan was turned off.
• Forklift NOT connected to charger
• Unplug unused power supplies/extension cords - Unplugged some
• Lights OFF (for end stations check lights via webcams)
• Unplug power supplies for Valcom Paging System and 48V DC H1PSL Phone in Communications Room 163. Also, there is a mouse in this room 163. The animal kind.
• ALOG the LVEA has been swept.
LVEA not as silent as we remember -
After all of the sweeping to unplug items, etc., the LVEA was not as quiet as many of us recall from O3. There is a quietish high-pitched hum (like a fan) somewhere, but after Jason, TJ, Gerardo, and I listened for a bit, we couldn't tell specifically where it was coming from. It isn't a fan from an ISC table, nor is it the equipment in the PSL area emergency egress closet. Maaaybe it's the SQZ racks between HAM4 and 5, but you can also hear it when walking from there to the PSL; the SQZ racks are all new this time around, however. Or, Gerardo suggests checking whether it is the dust monitor pump in the mech room, which is a bit loud. There is a temp power supply under the HAM4 HWS table, but it has a slightly different, quieter hum.
Richard also reminded us that the LVEA VAC rack along the Y-manifold area now has the back door removed and may be noisier than before. Will look into this next Tues.
We never got a picture of the gustmeter instrumentation when I set it up, so it came up in the pre-O4 sweep. We are leaving the EY gustmeter setup in place, plugged in and taking data near the emergency door in the EY VEA. A picture of the current setup is attached. The gustmeter on ADC channel 12 has failed at some point.
The FCES was swept this morning before the run start. All well there with the above items looked at.
A noise source was identified in alog 69927, namely a loud dust mon pump.