Tue May 28 10:08:45 2024 INFO: Fill completed in 8min 41secs
Travis confirmed a good fill curbside.
This morning Camilla and I started to realign the corner station HWS table to our new SR3 position. After doing some checks on the table, we went into the control room, where we found out that we will be moving SR3 again, so we've deferred this work until we know that SR3 is in its final position. Below are links to procedures, some notes Camilla took, and other commentary to help us when we do this next.
T1700163 - Procedure to align with in-vac picos, but mostly only M1.
LLOalog59625 - LLO alignment using above procedure
Presentation slides on pico alignment with both M1 & M2
LHOalog62527 - April 2022 LHO alignment with these picos
Notes from today:
The ITMX ALS beam was hitting the top periscope mirror and missing the bottom mirror. The ITM ALS beam was hitting both mirrors but was mis-centered on the bottom periscope mirror.
Started looking at ITMX camera, removed both plates.
SR3 Yaw, old alignment -150, new alignment 120.
Checking beam on HWS using SR3:
Beam there at -150; beam gone at -190; plateau/edge at -170; plateau/edge at -75.
Center between both plateau edges is about -120 (the midpoint of -170 and -75 is -122.5); see the sketch at the end of these notes. Checked that we could see the ITMY HWS beam on the camera here.
Went towards optimal SR3, SR3 Y at -60
Pico#7 M2, X more negative. Started at 0 and went to -500, where our average pixel values were back to the plateau, before clipping on the other side.
SR3Y to -40
Pico#7 X to -1500
SR3Y to -10
Pico#7 X to -3090
Now have a hard edge on the left side of the camera picture.
Moved #8 M1 from 0 to 500 in X. (Realized we meant to be moving M1 the whole time)
Reverted so all picos are 0,0. SR3 Y back to original +120.
We can see some bright spots on the ITMX camera that remain when ITMX is misaligned (screenshot). We should check for on-table beams getting to the camera; we might need to add beam blocks. We are also seeing multiple-point diffraction on the streamed image; this has been dust in the past, so it might be a good time to check the cleanliness of the optics while we are on the table next.
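Since we will repeat the plateau-edge centering once SR3 is in its final position, here is a minimal sketch of that bookkeeping, assuming we record the camera's mean pixel value at each yaw setting; the scan values below are illustrative placeholders chosen to match the edges quoted above, not today's data.

```python
# Illustrative sketch of centering between the plateau edges seen on the HWS
# camera during a yaw scan. Scan values are placeholders, not recorded data.
import numpy as np

yaw     = np.array([-190, -170, -150, -120, -100, -75, -60])  # scanned yaw settings
mean_px = np.array([0.1,  0.8,  1.0,  1.0,  1.0,  0.8, 0.1])  # mean camera counts (arb.)

# Treat settings where the mean pixel value exceeds half its maximum as
# "on the plateau", then center the alignment between the plateau edges.
on_plateau = mean_px > 0.5 * mean_px.max()
lo, hi = yaw[on_plateau].min(), yaw[on_plateau].max()
center = 0.5 * (lo + hi)
print(f"Plateau from {lo} to {hi}, center ~ {center}")  # -170 to -75, center -122.5
```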
Pictures of the OMC IO Chassis.
DCC-D1301004 has been updated
LLO noted a 10 Hz comb that was attributed to the IRIG B monitor channel at the end station connected to the PEM chassis. (https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=71217)
LHO has agreed to remove this signal for a week to see how it impacts our DARM signal.
EX and EY cables have been disconnected.
Disconnection times:
Station | GPS | UTC |
---|---|---|
EX | 1400946722 | 15:51:44 Tue 28 May 2024 UTC |
EY | 1400947702 | 16:08:04 Tue 28 May 2024 UTC |
I checked H1:CAL-PCALX_IRIGB_DQ at gps=1400946722 and H1:CAL-PCALY_IRIGB_DQ at gps=1400947702. From 10 seconds prior to the cable disconnection to 1 second before it, the IRIG-B code in these channels agreed with the time stamp after taking into account the leap-second offset (currently 18 s).
Note that the offset is there because the IRIG-B output from the CNS-II witness GPS clock ignores leap seconds.
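For reference when repeating this check, a minimal sketch of the offset arithmetic is below; it only illustrates the 18 s GPS-UTC leap-second bookkeeping and does not decode the IRIG-B code out of the DQ channels.

```python
# Minimal sketch of the leap-second bookkeeping described above, assuming the
# CNS-II witness IRIG-B follows GPS time (no leap seconds) while UTC currently
# lags GPS by 18 s. This does not decode the H1:CAL-PCAL*_IRIGB_DQ channels.
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 18  # GPS-UTC offset as of May 2024

def gps_to_utc(gps_seconds):
    """UTC time corresponding to a GPS second count."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - LEAP_SECONDS)

def expected_irigb(gps_seconds):
    """Time stamp the witness IRIG-B should encode, since it ignores leap seconds."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds)

gps = 1400946722  # EX disconnection time from the table above
print("UTC:   ", gps_to_utc(gps))      # 2024-05-28 15:51:44+00:00
print("IRIG-B:", expected_irigb(gps))  # 18 s ahead of UTC
```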
I fixed things in https://svn.ligo.caltech.edu/svn/aligocalibration/trunk/Common/Scripts/Timing/ so that the scripts run in the control room with the modern gpstime package and the offset is no longer hard-coded. I committed the changes.
Please reconnect the cables soon so we have independent witness signals of the time stamp. There could be a better implementation, but we need the current one until a proper fix is proposed, approved, and implemented.
I have checked the weekly Fscans for similar 1 Hz and 10 Hz combs in the H1 data (which we have not seen in the H1 O4 data thus far), and for any obvious changes in the H1 spectral artifacts due to the May 28 configuration change. I do not see any changes due to this configuration change; this may be because the coupling from the timing IRIG-B signal is lower at LHO than at LLO. I do notice some change around the beginning of May 2024 where the number of line artifacts seems to increase; this should be investigated further. Attached are two figures showing the trends of the 1 Hz and 10 Hz combs, where the black points are the average comb power and the colored points are the individual comb frequency power values; the color of the individual points indicates the frequency. Note that there is no change in the last black data point (the only full-week Fscan so far).
The IRIGB cables at EX and EY have been reconnected (PEM AA Chassis CH31).
As per WP 11871 I updated the X.509 certificates for the local LDAP servers. This is regular maintenance and should not have been noticed by any users. I updated the secondary system first and found some Puppet issues that I fixed before updating the other servers. This mainly involved putting the CA certificate chain into the certificate file used by the slapd process.
FAMIS 20030
No major events this week other than the continued rise in PMC reflected power, which we plan to investigate more in-depth in the enclosure today.
Workstations were updated and rebooted. This was an OS packages upgrade. Conda packages were not upgraded.
I've made a program that takes in a filter channel name and start/stop times, reads over the raw data, and creates a visual interpretation of which filters are being turned ON and OFF and at what times, down within the second. It can be found at /ligo/gitcommon/filterBankChanges/filter_bank_changes.py. The attachment gives an example of what the current output looks like and what everything means.
Unfortunately, since my program reads raw data, it can take a long time to run, depending on the length of time and the number of filter modules that change during that time period. I would recommend not running it for time spans of more than a day if you don't want to wait more than ~1.5 minutes.
If you need to look at filter changes over a much longer period of time, Dave created a program, fmscan, that uses minute trends and can handle that (77899)! A quick overview of the differences between our two scripts: fmscan can look over long stretches of filter module data but is only precise to the minute, while filter_bank_changes is precise to the second but would be impractical to run over the large timespans that fmscan can handle.
Similar to his program, we'll be adding an option to run filter_bank_changes.py with a right-click and Execute on filter MEDM screens. I'm also working on having the script grab the filter modules' names to put into the right spots in the pop-up image.
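As a rough illustration of the idea (this is not the script itself), the sketch below reads a filter bank's SWSTAT readback with gwpy and reports switching by diffing its bits; the example channel name and the FM1-FM10 bit mapping are assumptions made for illustration.

```python
# Rough sketch only -- not /ligo/gitcommon/filterBankChanges/filter_bank_changes.py.
# Reads a filter module's SWSTAT readback and reports when individual filters
# toggle. The channel name and the FM1-FM10 bit mapping (bits 0-9) are
# assumptions for illustration.
import numpy as np
from gwpy.timeseries import TimeSeries

channel = "H1:LSC-DARM1_SWSTAT"   # hypothetical example channel
start, stop = 1400946000, 1400949600

data = TimeSeries.get(channel, start, stop)
values = data.value.astype(np.int64)
times = data.times.value

prev = values[0]
for t, v in zip(times[1:], values[1:]):
    changed = int(prev) ^ int(v)
    for fm in range(10):          # FM1..FM10 assumed at bits 0-9
        if changed & (1 << fm):
            state = "ON" if v & (1 << fm) else "OFF"
            print(f"GPS {t:.2f}: FM{fm + 1} switched {state}")
    prev = v
```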
TITLE: 05/28 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
INCOMING OPERATOR: Ryan S
SHIFT SUMMARY: Still Observing and have been Locked for almost 33 hours. We had the superevent S240527fv a few minutes into the start of my shift, but besides that it's been a super quiet evening. As a heads up/reminder, the PSL diffracted power is low again, so it'll need to be touched up before relocking after maintenance.
LOG:
23:00 Observing
23:09 Superevent S240527fv
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
23:53 | PCAL | Tony | PCAL Lab | y (local) | PCAL experiment | 00:30 |
Observing and Locked for 28.5 hours. Quiet evening
TITLE: 05/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 154Mpc
INCOMING OPERATOR: Oli
SHIFT SUMMARY:
H1 has been locked for over 24 hours and Observing all day
Superevent candidate S240527fv 23:09 UTC
LOG:
Start Time | System | Name | Location | Laser_Haz | Task | Time End |
---|---|---|---|---|---|---|
20:12 | Tour | Mike & Amber | Ctrl Rm & OverPass | N | Giving a tour | 21:13 |
TITLE: 05/27 Eve Shift: 2300-0800 UTC (1600-0100 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
OUTGOING OPERATOR: Tony
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 13mph Gusts, 9mph 5min avg
Primary useism: 0.04 μm/s
Secondary useism: 0.08 μm/s
QUICK SUMMARY:
Observing and have been locked for 24 hours, and everything is doing well. We just got Superevent S240527fv with both Tony and me on shift!
TITLE: 05/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 153Mpc
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 19mph Gusts, 16mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
H1 Has been locked and Observing all day.
I've been seeing ranges in the low 160s Mpc today, which is great!
GRB-Short E487675 18:34 UTC
Superevent candidate S240527en
PSL Weekly Status Update Famis 26245
Laser Status:
NPRO output power is 1.819W (nominal ~2W)
AMP1 output power is 66.51W (nominal ~70W)
AMP2 output power is 137.7W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PDWD watchdog is GREEN
PMC:
It has been locked 54 days, 21 hr 43 minutes
Reflected power = 19.44W
Transmitted power = 107.0W
PowerSum = 126.4W
FSS:
It has been locked for 0 days 17 hr and 35 min
TPD[V] = 0.89V
ISS:
The diffracted power is around 2.6%
Last saturation event was 0 days 17 hours and 34 minutes ago
Possible Issues:
PMC reflected power is high
Mon May 27 10:08:59 2024 INFO: Fill completed in 8min 55secs
Link to the full report: Report
Summary:
TITLE: 05/27 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 8mph Gusts, 5mph 5min avg
Primary useism: 0.01 μm/s
Secondary useism: 0.07 μm/s
QUICK SUMMARY:
H1 is currently locked and Observing with good range.
On May 13th (as part of my DQ shift), I noticed a line of 4kHz glitches in the glitchgram, and they have appeared every day since then. Checking the summary pages revealed they’ve been intermittently present since Apr 8, 2024 but not previously (I checked from Dec 1, 2023 onwards). Fig. 1 shows an example glitchgram from May 13th. Figs. 2-4 show Omegascans for some of these glitches. I also created a spreadsheet to characterize these glitches, and did not find any pattern or regularity to their appearance.
I checked the aLogs for possible sources and found the SQZ angle was being tuned around the time the glitches started (aLog). I then compared the timing of the glitches to the SQZ FC ASC channels and found a correlation between the deviation in the output and the strength of the glitches. See Figs. 5-7 for examples, as well as the spreadsheet. I will be meeting with the LSC fellows on Tuesday to discuss these.
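As a rough sketch of this kind of comparison (not the analysis used here), one can sample the ASC control signal around each glitch time and quantify its deviation; the channel name and glitch times below are hypothetical placeholders.

```python
# Minimal sketch, assuming a list of glitch GPS times (e.g. from Omicron
# triggers) is already in hand; the SQZ FC ASC channel name is a hypothetical
# placeholder, not necessarily the channel used in this entry.
import numpy as np
from gwpy.timeseries import TimeSeries

channel = "H1:SQZ-FC_ASC_OUT_DQ"             # hypothetical placeholder name
glitch_times = [1400800000.0, 1400803000.0]  # placeholder glitch GPS times

for t in glitch_times:
    # Pull a short stretch around the glitch and quantify the control-signal
    # deviation as the peak excursion from its local median.
    data = TimeSeries.get(channel, t - 30, t + 30)
    deviation = np.max(np.abs(data.value - np.median(data.value)))
    print(f"GPS {t:.1f}: peak ASC deviation = {deviation:.3g}")
```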
Since May 24, the 4 kHz glitches have looked somewhat mitigated.
The correlation between the deviations of the SQZ channels and the loud glitches at 4 kHz has hardly been seen since May 24.
(The attachments are the Omicron trigger plot and the SQZ FC ASC channel on May 26.)
There was SQZ commissioning work on May 24, and there are 2 alogs (alog 77980, alog 78033) related to SQZ alignment and the low-frequency noises.
The 4 kHz glitches are back on May 29, and the correlation between the glitches and the deviation of the SQZ FC ASC channels looks apparent again.
JeffK, FranciscoL [LouisD remote]
Finished characterizing the spare OMC DCPD whitening chassis (D2200215), S2300002.
Noise is matching the expected performance. Transfer Functions look normal.
A fit of the TF data is pending and will be added as a comment.
Here are the zpk fits for the OMC DCPD data. The PDF report is attached.

OMC DCPD A:
Whitening ON: zeros: [0.993694] Hz, poles: [4.467514e+04, 9.871288e+00] Hz
Whitening OFF: zeros: [] Hz, poles: [44628.188186] Hz

OMC DCPD B:
Whitening ON: zeros: [1.002049] Hz, poles: [4.474433e+04, 9.956902e+00] Hz
Whitening OFF: zeros: [] Hz, poles: [44711.662211] Hz

The raw output is stored at [CalSVN]/trunk/Common/Electronics/H1/DCPDWhitening/OMCA/S2300002/20240523/Results/.
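For a quick sanity check of what these fits imply, here is a minimal sketch that evaluates the whitening-ON response shape for OMC DCPD A with scipy; it assumes the quoted zeros/poles in Hz correspond to s-plane roots at -2*pi*f, and the overall gain (not quoted above) is set to 1, so only the shape is meaningful.

```python
# Minimal sketch (not the fitting code): frequency-response shape implied by
# the OMC DCPD A, whitening ON, zpk fit quoted above. Assumes zeros/poles in Hz
# map to s-plane roots at -2*pi*f; the overall gain is unknown here, so k = 1.
import numpy as np
from scipy import signal

zeros_hz = [0.993694]
poles_hz = [4.467514e+04, 9.871288e+00]

z = -2 * np.pi * np.array(zeros_hz)   # Hz -> rad/s, left half-plane
p = -2 * np.pi * np.array(poles_hz)
k = 1.0                               # placeholder gain

f = np.logspace(-1, 5, 1000)          # 0.1 Hz to 100 kHz
w, h = signal.freqs_zpk(z, p, k, worN=2 * np.pi * f)

mag_db = 20 * np.log10(np.abs(h))
print(f"Relative response at 100 Hz: {mag_db[np.argmin(np.abs(f - 100))]:.1f} dB")
```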