The PCAL team went to End Y today with PS4 to do a regular measurement and a "long measurement", consisting of 15 minutes in each position instead of the usual 240 seconds.
PS4 rho, kappa, u_rel on 2024-10-25, corrected to ES temperature 299.3 K: -4.71053733727373, -0.0002694340454223, 4.653616030093759e-05
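For reference, this correction is typically a linear one with kappa as the temperature coefficient; a minimal Python sketch under that assumption (the function name and reference temperature are illustrative, not the actual PS4 analysis code):

def correct_ws_responsivity(rho_ref, kappa, T_ref, T_es):
    # Assumed linear model: rho(T) = rho(T_ref) * (1 + kappa * (T - T_ref)),
    # with rho in V/W and kappa in 1/K.
    return rho_ref * (1.0 + kappa * (T_es - T_ref))

# Illustrative call only; T_ref here is hypothetical, and the rho listed
# above is already the value corrected to 299.3 K.
rho_at_es = correct_ws_responsivity(rho_ref=-4.7105, kappa=-2.6943e-4,
                                    T_ref=296.0, T_es=299.3)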
Copying the scripts into tD directory...
Connected to nds.ligo-wa.caltech.edu
martel run
reading data at start_time: 1417885234
reading data at start_time: 1417885750
reading data at start_time: 1417886151
reading data at start_time: 1417886600
reading data at start_time: 1417886970
reading data at start_time: 1417887305
reading data at start_time: 1417887420
reading data at start_time: 1417888020
reading data at start_time: 1417888356
Ratios: -0.5346804302935332 -0.543306389094602
writing nds2 data to files
finishing writing
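For context, the "reading data at start_time" lines above are NDS2 pulls; a minimal sketch of reproducing one of them with the Python NDS2 client (the channel names here are assumptions, the Pcal script defines its own list):

import nds2  # python nds2-client bindings

channels = ['H1:CAL-PCALY_TX_PD_VOLTS_OUT_DQ',   # assumed channel name
            'H1:CAL-PCALY_RX_PD_VOLTS_OUT_DQ']   # assumed channel name
conn = nds2.connection('nds.ligo-wa.caltech.edu', 31200)
start = 1417885234   # first GPS start time from the log above
stop = start + 240   # one regular 240 s stretch (900 s for the long measurement)
for buf in conn.fetch(start, stop, channels):
    print(buf.name, buf.data.mean())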
Background Values:
bg1 = 18.604505; Background of TX when WS is at TX
bg2 = 5.391990; Background of WS when WS is at TX
bg3 = 18.556794; Background of TX when WS is at RX
bg4 = 5.396890; Background of WS when WS is at RX
bg5 = 18.642247; Background of TX
bg6 = -0.202112; Background of RX
The uncertainties reported below are relative standard deviations in percent.
Intermediate Ratios:
RatioWS_TX_it = -0.534680;
RatioWS_TX_ot = -0.543306;
RatioWS_TX_ir = -0.527163;
RatioWS_TX_or = -0.534899;
RatioWS_TX_it_unc = 0.055923;
RatioWS_TX_ot_unc = 0.051445;
RatioWS_TX_ir_unc = 0.062749;
RatioWS_TX_or_unc = 0.054710;
Optical Efficiency
OE_Inner_beam = 0.986010;
OE_Outer_beam = 0.984479;
Weighted_Optical_Efficiency = 0.985245;
OE_Inner_beam_unc = 0.044504;
OE_Outer_beam_unc = 0.041112;
Weighted_Optical_Efficiency_unc = 0.060587;
Martel Voltage fit:
Gradient = 1637.914766;
Intercept = 0.150812;
Power Imbalance = 0.984123;
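The Martel fit above is a straight-line fit of the output channel against the injected Martel voltage; a minimal sketch of that kind of fit with synthetic example data (NOT the measured values; generated from the reported gradient/intercept purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
martel_volts = np.linspace(0.0, 4.0, 9)   # synthetic injected voltages (V)
output_counts = (1637.914766 * martel_volts + 0.150812
                 + rng.normal(0, 0.5, martel_volts.size))  # synthetic output (counts)

# First-order polynomial fit: gradient (counts/V) and intercept (counts).
gradient, intercept = np.polyfit(martel_volts, output_counts, 1)
print(f"gradient = {gradient:.3f} counts/V, intercept = {intercept:.3f} counts")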
Endstation Power sensors to WS ratios:
Ratio_WS_TX = -0.927655;
Ratio_WS_RX = -1.384163;
Ratio_WS_TX_unc = 0.044122;
Ratio_WS_RX_unc = 0.042178;
=============================================================
============= Values for Force Coefficients =================
=============================================================
Key Pcal Values:
GS = -5.135100; Gold Standard value (V/W)
WS = -4.710537; Working Standard value (V/W)
costheta = 0.988362; Cosine of the angle of incidence
c = 299792458.000000; Speed of light (m/s)
End Station Values : /ligo/gitcommon/Calibration/pcal
TXWS = -0.927655; TX to WS relative responsivity (V/V)
sigma_TXWS = 0.000409; Uncertainty of TX to WS relative responsivity (V/V)
RXWS = -1.384163; RX to WS relative responsivity (V/V)
sigma_RXWS = 0.000584; Uncertainty of RX to WS relative responsivity (V/V)
e = 0.985245; Optical efficiency
sigma_e = 0.000597; Uncertainty in optical efficiency
Martel Voltage fit:
Martel_gradient = 1637.914766; Gradient of fit of Martel to output channel (C/V)
Martel_intercept = 0.150812; Intercept of fit of Martel to output (C/V)
Power Loss Apportionment: beta = 0.998844; Ratio between input and output (beta)
E_T = 0.992021; TX optical efficiency
sigma_E_T = 0.000301; Uncertainty in TX optical efficiency
E_R = 0.993169; RX optical efficiency
sigma_E_R = 0.000301; Uncertainty in RX optical efficiency
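Consistent with the numbers above, the apportionment splits the overall optical efficiency e between the two paths via beta, i.e. E_T = sqrt(e*beta) and E_R = sqrt(e/beta); this relation is inferred from the reported values, not quoted from the analysis code:

import math

e = 0.985245     # weighted optical efficiency (above)
beta = 0.998844  # input/output power-loss ratio (above)

# Inferred relations: e = E_T * E_R and beta = E_T / E_R.
E_T = math.sqrt(e * beta)   # -> 0.992021
E_R = math.sqrt(e / beta)   # -> 0.993169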
Force Coefficients:
FC_TxPD = 9.138978e-13; TxPD force coefficient
FC_RxPD = 6.216600e-13; RxPD force coefficient
sigma_FC_TxPD = 4.923605e-16; Uncertainty in TxPD force coefficient
sigma_FC_RxPD = 3.250921e-16; Uncertainty in RxPD force coefficient
data written to ../../measurements/LHO_EndY/tD20241210/
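For orientation, the basic Pcal relation is F = 2 P cos(theta) / c for laser power P reflecting off the test mass, and chaining the responsivities in this report reproduces the force coefficients above; the exact form below is reconstructed from the reported numbers as a cross-check, not taken from the analysis code:

c = 299792458.0        # speed of light (m/s)
costheta = 0.988362    # cosine of the Pcal angle of incidence
WS = -4.710537         # working standard responsivity (V/W)
TXWS, RXWS = -0.927655, -1.384163  # TX/RX PD to WS ratios (V/V)
martel_gradient = 1637.914766      # output channel calibration (C/V, assuming C = counts)
E_T, E_R = 0.992021, 0.993169      # apportioned optical efficiencies

# Reconstructed coefficients; these reproduce FC_TxPD = 9.139e-13 and
# FC_RxPD = 6.217e-13 (N per output count, under the same assumption).
FC_TxPD = 2 * costheta * E_T / (c * abs(TXWS * WS) * martel_gradient)
FC_RxPD = 2 * costheta / (c * E_R * abs(RXWS * WS) * martel_gradient)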
Before: beam spot looking a little oblong, but not too bad.
Martel Voltage Test plots
WS_at_RX plots
WS at RX Side with Both Beams
WS at Transmitter Module
PCAL ES procedure & log: DCC T1500062 (modified for the long measurement)
The analysis for the long measurement is still pending.
This adventure was brought to you by Dripta & Tony S.
I forgot to link to the trends doc:
https://git.ligo.org/Calibration/pcal/-/blob/master/O4/ES/measurements/LHO_EndY/tD20241210/LHO_EndY_PD_ReportV4.pdf?ref_type=heads
WP12239 New VM server
Erik, Jonathan:
The old proxmox machine was taken out of service and repaired. In production it was replaced with a W2275 (aka the oaf FE). Please see Jonathan's alog for details. After h0epics was moved, its IOCs were started by hand per the wiki instructions:
dust monitors (lvea, lab, ey, ex, dr), weather stations (ex, ey, mx, my) and 3ifo dewpoint sensor.
WP12236 New RGC and Timing Card h1omc0
Jonathan, Erik, EJ, Marc, Daniel, Dave:
h1omc0 was upgraded to a custom RCG5.30 and a new LIGO Timing Card (ver1589). This will permit changing the duotone frequency from 960/961 Hz to 1920/1921 Hz in conjunction with LLO next week. Erik verified the duotone is currently unchanged at 960/961 Hz.
Please see Erik's alog for details.
The RCG upgrade of h1iopomc0 added one slow channel to the DAQ INI (H1:FEC-179_TIMING_CARD_TEMP_DEG_C). A DAQ restart was required, which permitted an EDC change for several pending WPs (marked as *).
WP12243 Slow Controls Timing Card Version Check
Daniel
Daniel changed the slow-controls Beckhoff code to permit the following versions of the LIGO timing card in the FE IO chassis:
Ver | Desc |
496 | Standard first version timing card |
1000 | New LIGO DAC timing card |
1589 | Variable DuoTone frequency timing card |
The code was restarted at 10:19. This fixed the persistent EX timing error due to h1susex TCver=1000 and preempted a corner-station h1omc0 error with TCver=1589.
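For illustration only (the production check lives in the slow-controls Beckhoff code, not Python), the version gate amounts to an allowed-set test:

# Timing-card versions now accepted by the slow-controls check (from the
# table above); any other reported version flags a timing error.
ALLOWED_TIMING_CARD_VERSIONS = {496, 1000, 1589}

def timing_card_ok(reported_version: int) -> bool:
    return reported_version in ALLOWED_TIMING_CARD_VERSIONS

# e.g. h1susex reports TCver=1000 and h1omc0 now reports TCver=1589; both pass.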
WP12164 Add new Guardian PCALX_STAT channels to DAQ*
Tony, TJ, Dave:
The new PCALX_STAT guardian node's DAQ channels were added to the EDC as part of the DAQ restart.
WP12195 Add missing CDSRFM slow channel to DAQ*
Dave:
A new H1EPICS_CDSRFM.ini was loaded, adding the missing channel H1:CDS-RFM_LRS_EX2CS_CHCNT
WP12185 add VACSTAT slow channels to DAQ*
Dave:
A new H1EPICS_VACSTAT.ini added the new state channels to the DAQ.
CDS Hardware Status IOC
Erik, Dave:
Following the upgrade of h1omc0's timing card, the CDS HW status IOC reported problems with this card. We found that one unused bit in the LPTC_STATUS PV was unset in h1omc0 and set in all the other front ends, including h1susex.
All FE except h1omc0 | LPTC_STATUS = 0x4fbf ee00 (bit 16 set) |
h1omc0 | LPTC_STATUS = 0x4fbe ee00 (bit 16 unset) |
Now that Daniel has fixed the TCver error in EX, the temporary code in cds_status_ioc.py to verify the only EX error is the TCver error was removed.
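A quick way to see the difference is to test bit 16 of the status word; a minimal sketch using the two values quoted above:

def bit16_set(status: int) -> bool:
    # Bit 16 (counting from 0) of the LPTC_STATUS word.
    return bool(status & (1 << 16))

print(bit16_set(0x4fbfee00))  # all other front ends -> True
print(bit16_set(0x4fbeee00))  # h1omc0               -> False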
DAQ Restart
Jonathan, Erik, EJ, Dave:
The DAQ was restarted primarily for the h1iopomc0 INI change due to its RCG upgrade. Several EDC changes which have been waiting for target-of-opportunity were also done.
This was a messy DAQ restart. Adding h1omc0 to the custom RCG boot server h1vmboot0 inadvertently changed the DAQ FE list to only include two front ends: h1susex and h1omc0. When the 0-leg was started, DC0 initially looked good, but then we noticed that on the CDS overview only the DC0 EPICS channels for omc0 and susex were connecting, and Jonathan found GDS0 had lost a lot of channels. Around this time I restarted the EDC on h1susauxb123 and found that the expected number of channels was down by about 1000. At this time FW0 was writing tiny frames (only two front ends).
DC0's DAQ configuration was fixed and the 0-leg was restarted. At this point the EDC was out of sync, so it was restarted. This generated a new H1EDC.ini, which necessitated a third and final 0-leg restart (plus another GDS0 restart).
The 1-leg restart was a single restart, with a GDS1 restart also needed.
Change of RCG reporting color on CDS Overview
Dave:
We now have two front end systems in production which are running with a custom version of the RCG. To highlight this, I have changed the non-standard-RCG indicator color from purple to black on the CDS Overview. Please see attached.
Channels added to DAQ:
h1iopomc0:
< H1:FEC-179_TIMING_CARD_TEMP_DEG_C 4 16
h1edc (+26 chans, 57429 to 57455)
< H1:CDS-RFM_LRS_EX2CS_CHCNT 4 16
< H1:CDS-VAC_STAT_STATE 4 16
< H1:CDS-VAC_STAT_VERSION 4 16
< H1:GRD-PCALX_STAT_ACTIVE 4 16
< H1:GRD-PCALX_STAT_ARCHIVE_ID 4 16
< H1:GRD-PCALX_STAT_CONNECT 4 16
< H1:GRD-PCALX_STAT_ERROR 4 16
< H1:GRD-PCALX_STAT_EXECTIME 4 16
< H1:GRD-PCALX_STAT_INTENT 4 16
< H1:GRD-PCALX_STAT_LOAD_STATUS 4 16
< H1:GRD-PCALX_STAT_MODE 4 16
< H1:GRD-PCALX_STAT_NOMINAL_N 4 16
< H1:GRD-PCALX_STAT_NOTIFICATION 4 16
< H1:GRD-PCALX_STAT_OK 4 16
< H1:GRD-PCALX_STAT_OP 4 16
< H1:GRD-PCALX_STAT_PV_TOTAL 4 16
< H1:GRD-PCALX_STAT_READY 4 16
< H1:GRD-PCALX_STAT_REQUEST_N 4 16
< H1:GRD-PCALX_STAT_SPM_CHANGED 4 16
< H1:GRD-PCALX_STAT_SPM_MONITOR 4 16
< H1:GRD-PCALX_STAT_SPM_TOTAL 4 16
< H1:GRD-PCALX_STAT_STALLED 4 16
< H1:GRD-PCALX_STAT_STATE_N 4 16
< H1:GRD-PCALX_STAT_STATUS 4 16
< H1:GRD-PCALX_STAT_TARGET_N 4 16
< H1:GRD-PCALX_STAT_VERSION 4 16
RESTARTS
Tue10Dec2024
LOC TIME HOSTNAME MODEL/REBOOT
13:19:44 h1omc0 h1iopomc0 <<< reboot h1omc0
13:19:57 h1omc0 h1omc
13:20:10 h1omc0 h1omcpi
13:35:19 h1daqdc0 [DAQ] <<< First 0-leg restart with reduced DAQ config
13:35:33 h1daqfw0 [DAQ]
13:35:33 h1daqtw0 [DAQ]
13:35:34 h1daqnds0 [DAQ]
13:35:43 h1daqgds0 [DAQ]
13:36:49 h1susauxb123 h1edc[DAQ] <<< First EDC restart, also reduced config
13:42:33 h1daqdc0 [DAQ] <<< Second 0-leg restart, full FE config but incorrect EDC
13:42:38 h1daqfw0 [DAQ]
13:42:39 h1daqnds0 [DAQ]
13:42:39 h1daqtw0 [DAQ]
13:42:47 h1daqgds0 [DAQ]
13:44:25 h1susauxb123 h1edc[DAQ] <<< Second EDC restart, now has full config but disagrees with DC0
13:45:24 h1daqdc0 [DAQ] <<< Third 0-leg restart to sync up with EDC
13:45:35 h1daqfw0 [DAQ]
13:45:35 h1daqtw0 [DAQ]
13:45:36 h1daqnds0 [DAQ]
13:45:43 h1daqgds0 [DAQ]
13:46:23 h1daqgds0 [DAQ] <<< second GDS0 restart needed
13:51:20 h1daqdc1 [DAQ] <<< 1-leg restart, all configs good.
13:51:32 h1daqfw1 [DAQ]
13:51:33 h1daqtw1 [DAQ]
13:51:34 h1daqnds1 [DAQ]
13:51:41 h1daqgds1 [DAQ]
13:52:14 h1daqgds1 [DAQ] <<< second GDS1 restart needed
Note that the last DAQ restart was on 15 October 2024; the DAQ had been running for 56 days 4 hours with no CRC errors or retransmissions.
[Erik v.R., Dave, Jonathan]
h1omc0 was upgraded to support alternative duotone frequencies.
A new parameter was added to the iop model, h1iopomc0, 'duotone_frequency', currently set to '960', which configures the duotone frequency to 960/961 Hz, the standard frequencies.
The h1omc0 front end was moved from the primary bootserver, h1vmboot1, to the secondary bootserver, h1vmboot0, using changes to the puppet configuration.
All models for h1omc0 were built and installed from RCG source rather than packages. The git commit of the RCG source is tagged as 'h1_2024_12_10_variable_duotone'.
The omc0 front end and IO chassis were shut down. Timing card SN S2101117 was replaced with SN S2101084, which has newer firmware (version 1589) that supports alternative duotone frequencies.
The IO chassis and front-end were restarted.
A few problems were encountered:
Running puppet on the secondary bootserver overwrote the DAQ master template and some EDC INI files. The master file problem was revealed in the DAQ overview where most of the front ends were reporting no channels. The EDC overwrite showed up as too few total EDC channels.
Running the puppet configuration on the production server re-enabled DHCP for hosts that were supposed to boot from the secondary bootserver. These had to be manually commented out from the DHCPD configuration, then the service restarted.
To change the duotone to 1920 Hz: edit the h1iopomc0 model, set 'duotone_frequency=1920' in the parameter block, and save the model. Build and install the model, then restart all models.
As part of WP 12239 we moved h0epics, cdsvmscript1, epics-burt, and autoburt off of the cds0proxmox hypervisor. This resulted in a few minutes of downtime for the dust monitor IOC around 8:19am local time. After this was done we were able to look at cdsproxmox: its boot drive had failed. With some help from Fil we got the drives replaced and reinstalled Proxmox VE on cdsproxmox. We adjusted DNS and renamed the box cdsproxmox0. Some notes:
* This is now in a temporary state. We aim to retire this hardware by or at the end of O4; new hypervisor computers are being procured. As such we did not provision much storage on this machine, just enough to run the hypervisor, relying on the shared storage layer to handle the VMs.
* As per the Proxmox administrator's guide, we removed cdsproxmox from the cluster prior to attaching it back as cdsproxmox0.
To change a disk image name in Proxmox:
Get the numeric id for the VM from the web interface
Turn off the VM
edit /etc/pve/nodes/<hypervisor hostname>/qemu-server/<id>.conf
Change the disk image name and save the file.
Find the disk image file and change its name also.
Restart the VM. It will load from the renamed file.
Closes FAMIS#28383, last checked 81688
Once again, the coherence for the ITMX bias drive with bias off is below the coherence threshold. This time the coherence is 0.01, much lower than the threshold of 0.1, so there are once again no new analyzed measurements for ITMX.
Checked that the measurements for ITMX bias are running and are at the same magnitude for bias on and bias off. They are at a lower magnitude than the quadrant injection on the ESDAMON/LVEASDAMON, so we could think about increasing the magnitude if we are not happy with the no-charge-build-up-on-the-test-mass conclusion from 81688. We also still have a pending to-do from Vlad in 79597: explicitly cast the DARM data to np.float64.
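For reference, that to-do amounts to a one-line cast before the analysis; a minimal sketch, assuming the DARM time series arrives as a numpy-compatible array from the fetch step:

import numpy as np

def to_float64(data) -> np.ndarray:
    # Explicitly cast fetched (possibly single-precision) data to double
    # precision before any charge-measurement processing.
    return np.asarray(data, dtype=np.float64)

# e.g. darm = to_float64(buffer.data) after an NDS2/gwpy fetch.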
I've used the pico to center IM4 trans QPD. See alogs about this history: 80604 78962 78943 78856
After difficulty with input alignment shifts in the spring, we centered IM4 with the hopes that we could use that as a reference for any future input alignment shifts to reset the IMs. However this didn't work well when the PMC was swapped. Now we are recentering IM4 trans to get this new reference since we've been operating here for a while.
TJ, Camilla. WP12162. Continuing work in 76030
We installed a CFCS11-A adjustable SMA fiber collimator on the 50 um fiber for the ETMX HWS fiber-coupled LED source M530F2. This improved the size of the beam, but it was still too large at the edge of the collimator's range.
We removed HWS-L3 (D1800270), which made the beam a better size but still a little large, ~10-15 mm diameter at the periscope. We couldn't see a change in the beam by adjusting HWS-L2 (on the translation stage). We aligned the system using the retroreflection of the ALS beam: we aligned the ALS beam to the output coupler with HWS-MS1 and then adjusted the fiber collimator (mounted in a mirror mount) to align the HWS output to the final iris. We confirmed we were getting a reflection off ETMX by misaligning it ~20 urad. Note that the beam looks cleaner than in the usual photo, probably because the GV is closed so we are getting no retroreflection off ITMX. Set the frequency to 1 Hz. Replaced the mask photo and started the HWS code with new references.
The ETMX HWS 520 nm beam is now injected into the vacuum, where it hasn't been for ~months. Tagging DetChar.
To do: measure the beam profile of the beam out of the fiber and re-calculate an imaging solution for the ETMX HR surface.
This afternoon, I went back down to EX and measured the beam profile of the beam out of the collimator. Results attached. Photos of the beam profile after the source, after the HWP, and after the BS are attached.
GV5 and GV7 were soft closed at ~10am local time today to facilitate equipment craning for crane inspections. They were opened at ~11am local time at the completion of the vertex crane inspection.
This morning, Ryan let me know that he got a "Check PSL chiller" verbal alarm. Upon checking the chiller I saw the water level was a little low but not at minimum, but the usual oscillations in the water level as the chiller does its thing were getting a little close; this is likely what caused the alarm. I added 175 mL of water to the chiller to bring the water level back up to the MAX line.
Closes FAMIS27804
CO2X was found at 29.5; I added 200 mL to get it to right under the MAX line at 30.3.
CO2Y was found at 8; I added 250 mL to get it to just under the MAX line at 9.4.
Early Monday morning I was alerted to the lag compressor alarm on the instrument air compressor. Both compressors were running and the tank pressure was increasing until it reached cut-out pressure. I inspected for any signs of an air leak but didn't find anything obvious. I observed the compressor and found that when the air dryer swapped desiccant towers it began blowing down a lot of air. The poly-tube connections beneath each dryer had a build-up of ice. I cleared the ice out as best as I could and observed the dryer for another several cycles, but the problem did not reoccur. We will continue to monitor the performance of the air dryer.
We added some code to recognize the more recent timing board firmware revisions.
Tue Dec 10 10:10:41 2024 INFO: Fill completed in 10min 38secs
Today's TC-mins only just exceeded the -70C trip level (-75C,-74C). For tomorrow's fill I've increased the trip to -65C.
TITLE: 12/10 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Earthquake
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
IFO is LOCKING and in ENVIRONMENT due to EARTHQUAKE.
We had a few good hours of locking today but as with yesterday, earthquakes have been rampant. Here are details:
LOG:
Start Time | System | Name | Location | Lazer_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:03 | HAZ | LVEA IS LASER HAZARD | LVEA | YES | LVEA IS LASER HAZARD | 06:09 |
17:08 | FAC | Karen | MY | N | Technical Cleaning | 17:42 |
17:09 | PEM | Robert | LVEA | YES | Viewport setup | 18:09 |
19:54 | PEM | Robert | LVEA | YES | Recording lock acquisition from viewport | 20:13 |
21:59 | CDS | Erik, Jonathan | MSR | N | VM Server Setup | 23:58 |
22:34 | FAC | Tyler | EX | N | Cryopump check | 21:34 |
22:48 | EE | Fil | Receiving Rollup | N | Item transport | 23:48 |
00:07 | CDS | Jonathan, Erik | LVEA | remote | Moving virtual machines | 00:35 |
00:08 | PCAL | Tony | PCAL | y(local) | Getting stuff ready for tomorrow | 00:29 |
00:22 | PEM | Robert | LVEA | YES | Putting viewport covers back on | 01:22 |
Attached is DARM for the no SQZ test time (~10 minute averages). It seems like the noise stopped before the test started. We are seeing worse DARM around 20Hz with SQZ.
Camilla C, TJ S
This morning we had another period where our range was fluctuating by almost 40 Mpc, previously seen on Dec 1 (alog81587) and further back in May (alog78089). Camilla and I decided to turn off both TCS CO2s for a short period just to completely rule them out, since previously there was a correlation between these range dips and a TCS ISS channel. We saw no positive change in DARM during this short test, but we didn't want to go too long and lose lock. The CO2s were requested to have no output power from 16:12:30-16:14:30 UTC.
The past times we have seen this range loss, the H1:TCS-ITMX_CO2_ISS_CTRL2_OUT_DQ channel and the H1:SQZ-FC_LSC_DOF2_OUT_DQ channel had noise that correlated with the loss, but the ISS channel showed nothing different this time (attachment 2). We were also in a state of no squeezing at the time. So it's possible that this is a completely different type of range loss.
DetChar, could we run Lasso or check on HVETO for a period during the morning lock with our noisy range?
Here is a link to a lasso run during this time period. The two channels with the highest coefficients are a midstation channel H1:PEM-MY_RELHUM_ROOF_WEATHER.mean and a HEPI pump channel H1:HPI-PUMP_L0_CONTROL_VOUT.mean.