Closes FAMIS#26436, last checked 73177
BRS Driftmon values look good and are well within range for both BRSX and BRSY, although there has been a very slight downward trend in BRSX Driftmon over the past week, possibly related to the temperature drift this past week, which can be seen in both ETMX and ETMY.
We found that the temperature in EY was not holding steady over the weekend. The heating coil for the space was commanded on but was not running. After investigating, we found that the line voltage fuse in Phase C had blown, which was keeping the variable transformer from operating. We took the variable transformer out of the circuit and swapped fan relay contacts to run the elements not controlled by the variable transformer. The heating elements are currently working and EY is slowly heating up. The heating elements controlled by the variable transformer will need their resistance checked for a short or open circuit during the maintenance window.
Closes FAMIS 26215. Last checked 73664. Possible Issues: FSS TPD is low - plot attached
Laser Status:
NPRO output power is 1.812W (nominal ~2W)
AMP1 output power is 68.31W (nominal ~70W)
AMP2 output power is 137.3W (nominal 135-140W)
NPRO watchdog is GREEN
AMP1 watchdog is GREEN
AMP2 watchdog is GREEN
PMC:
It has been locked 13 days, 0 hr 55 minutes
Reflected power = 15.84W
Transmitted power = 109.9W
PowerSum = 125.8W
FSS:
It has been locked for 0 days 9 hr and 18 min
TPD[V] = 0.6423V
ISS:
The diffracted power is around 2.3%
Last saturation event was 0 days 8 hours and 3 minutes ago
Possible Issues:
FSS TPD is low - plot attached
Mon Oct 30 10:09:11 2023 INFO: Fill completed in 9min 7secs
Jordan confirmed a good fill curbside.
Jenne notes that the changing high frequency noise during the first hours of a lock since Wed 25th Oct (purple trace in 73798) may be caused by the new higher CARM gain (73738), changed on that day.
In 73798, I noted the 4.8 kHz noise that suddenly changes ~1h40 into NLN; Jenne suggests this could be aliased-down CARM gain peaking. Looking at the 64 kHz channel (plot attached), noise disappears from 18.4 to 18.7 kHz (very large peak) and appears at 16.6 to 16.8 kHz and at 21.1 kHz.
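As a back-of-envelope check on the aliasing idea (my own sketch, assuming the usual 16384 Hz DARM sample rate), a tone above Nyquist folds down as:

```python
def aliased_freq(f, fs):
    """Apparent frequency (Hz) of a tone at f Hz after sampling at fs Hz."""
    f_alias = f % fs          # fold into one sampling band
    if f_alias > fs / 2:      # reflect about the Nyquist frequency
        f_alias = fs - f_alias
    return f_alias

fs = 16384.0                       # Hz (assumed DARM sample rate)
print(aliased_freq(21100.0, fs))   # the 21.1 kHz peak folds to ~4.7 kHz
```

which is at least consistent with the ~4.8 kHz noise seen in the 16 kHz data while the parent peak sits at 21.1 kHz in the 64 kHz channel.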
We have had a weekend of shorter (~5 hour) locks and two locklosses from the LASER_NOISE_SUPPRESSION state #575 (73787, 73831), the state where this CARM gain is changed. Maybe this gain change has made us less stable; we'll discuss reverting it today.
CARM sliders reverted back to 6dB in ISC_LOCK (svn) and loaded.
On Monday, Naoki and Sheila 73855 saw that even with the CARM gain back at 12 dB, the high frequency squeezing was still bad and the optimal H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG sqz angle had to be adjusted a lot.
Maybe the CARM gain increase was affecting stability, but we don't think it was causing the high frequency noise, which was present in FDS and FIS but not in no-SQZ; plot attached. With adjustment of the SQZ angle, the SQZ greatly improved. It wasn't clear to us why the SQZ angle changed.
Although the overall high frequency noise was still bad once the CARM gain was reduced from 12 back down to 6, the peak around 4600 Hz did disappear once the CARM gain was reverted. See the attached SQZ BLRM 6 purple trace with CARM at 12 versus CARM at the nominal 6.
See the attached high frequency plot showing peaks at 16.4 kHz and 18.7 kHz (purple) and, thermalized, around 16.6 kHz (red); the peaks disappeared once the CARM gain was reduced (green to blue traces). This supports Jenne's suggestion that the peaks are CARM gain peaking.
TITLE: 10/30 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 155Mpc
OUTGOING OPERATOR: Austin
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 2mph Gusts, 1mph 5min avg
Primary useism: 0.03 μm/s
Secondary useism: 0.47 μm/s
QUICK SUMMARY: IFO has been in NLN for 4h40.
High-pitched fire panel alarm in the CUR was on from 15:58 to 15:00:45 UTC as the FAC team worked with the fire system. Tagging DetChar.
ITMY modes 5 and 8 are not damping, so I will try turning their gains to zero; plot attached. 73826. Tagging SUS.
ITMY mode 8 was damping with the old settings rather than the gain = 0 that Ryan had saved in lscparams, as VIOLIN_DAMPING had not yet been reloaded; we plan to reload it when out of observing.
I've applied Ryan's new ITMY#8 settings (FM1+FM6+FM10, G=+0.4), which are damping it well. I have left ITMY#5 at zero gain for now.
There is no noticeable change in the data quality at 14:58 (assuming there is a typo in the original log entry). I've attached a spectrogram of the strain data from 14-15UTC. There is non-stationary noise at low frequency throughout the hour and no change is visible at the time of the fire alarm.
I reloaded the VIOLIN_DAMPING guardian, so it will now keep the ITMY#8 gain at 0 to avoid ringing it up. Also set the ITMY#5 gain to 0 after talking with Rahul. Tagging SUS.
The H1 system gave me a call this morning (~10:20 UTC 10/30), but when I checked the system in NoMachine, the IFO was at INCREASE DARM OFFSET and relocking, and I couldn't find any issue (perhaps the timer ran over the limit?). However, once we got to NLN (10:26), I did see that the SQZ_MANAGER guardian was having some trouble, showing the warning "SQZ ASC AS42 not on???" in the guardian message log. It would cycle between this message and going back into FDS, so I hit RESET_SQZ_ASC before trying to go back into FDS, which looked to have solved the issue. Tagging SQZ.
Tony and I looked into this; the reason Austin was called was that the IFO had been relocking for 2 hours, so H1_MANAGER's 2-hour "wait_for_nln" timer was up. The reason for the long relocking period was a lockloss at the LASER_NOISE_SUPPRESSION (575) state, 1382693851; Ryan had a lockloss there this weekend too (73787).
NLN Lockloss at 8:20UTC - 1382689308
TITLE: 10/30 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 157Mpc
INCOMING OPERATOR: Austin
SHIFT SUMMARY: We stayed locked the whole shift, 13:07 as of 07:00 UTC. I found new settings for ITMY mode 8 (FM1+FM6+FM10, G=+0.4) and possibly ITMY mode 5/6 (FM6+FM8+FM10, G=-0.02); I left mode 5's gain at zero since I'm not too sure about it yet.
05:21 I went to commissioning to run a calibration measurement, alog 73828.
05:53: Back to Observing
INFO | bb output: /ligo/groups/cal/H1/measurements/PCALY2DARM_BB/PCALY2DARM_BB_20231030T052240Z.xml
Simulines:
GPS start: 1382678928.729304
GPS stop: 1382680256.772195
2023-10-30 05:50:38,627 | INFO | File written out to: /ligo/groups/cal/H1/measurements/DARMOLG_SS/DARMOLG_SS_20231030T052832Z.hdf5
2023-10-30 05:50:38,648 | INFO | File written out to: /ligo/groups/cal/H1/measurements/PCALY2DARM_SS/PCALY2DARM_SS_20231030T052832Z.hdf5
2023-10-30 05:50:38,659 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L1_SS/SUSETMX_L1_SS_20231030T052832Z.hdf5
2023-10-30 05:50:38,671 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L2_SS/SUSETMX_L2_SS_20231030T052832Z.hdf5
2023-10-30 05:50:38,683 | INFO | File written out to: /ligo/groups/cal/H1/measurements/SUSETMX_L3_SS/SUSETMX_L3_SS_20231030T052832Z.hdf5
ICE default IO error handler doing an exit(), pid = 2479290, errno = 32
STATE of H1: Observing at 155Mpc
TITLE: 10/29 Eve Shift: 23:00-07:00 UTC (16:00-00:00 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Camilla (DAY)
CURRENT ENVIRONMENT:
SEI_ENV state: CALM
Wind: 6mph Gusts, 3mph 5min avg
Primary useism: 0.02 μm/s
Secondary useism: 0.20 μm/s
QUICK SUMMARY:
The action of loading new filter modules likely caused an (expected) interruption in the output of the real-time model on h1lsc0 - this is the cause of the transient DAQ checksum (CRC) error
TITLE: 10/29 Day Shift: 15:00-23:00 UTC (08:00-16:00 PST), all times posted in UTC
STATE of H1: Observing at 156Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Didn't take calibration measurements as we weren't thermalized; if LLO is down, Ryan will take them.
LOG: ITMY mode 8 was increasing, so I turned off the gain this morning and then used half the gain (0.1) with the opposite sign to slowly damp it this afternoon. Ryan tried this in 73812. Nothing was changed in lscparams, but Ryan thinks we should save lscparams to keep the gain at zero in case we have a lockloss. Tagging SUS.
The SHG PZT voltage was close to railing at 100 V with temperature drifts, see attached. We do have a checker for this when the IFO isn't in NLN 71595, but this drift was quick. It automatically relocks at a better PZT value (56 V, middle of range). It unlocked at 22:33 UTC and we were back in observing at 22:46.
The SHG was very quick to relock, but the FC was slow, not really finding IR and failing at TRANSITION_TO_IR; it took ~7 attempts to lock. I paused SQZ_FC in DOWN after 4 failed lock attempts and touched FC2 as in 73797, but the green FC trans was already good. After 3 more attempts it locked itself.
Lockloss after 3h56 at NLN, 1382630509; this is the longest lock we've had in 24 hours.
Similar glitches in DARM/ETMX in the 750ms before lockloss, see attached and compare to 73817, 73578, 73539.
After 4 locklosses at LOCKING_ALS, started initial alignment at 16:38 UTC, finished at 16:58 UTC. We had to go through PRMI locking even after initial alignment. NLN at 17:52 UTC. Observing at 18:16 UTC (24 minutes for ADS to converge).
High frequency SQZ is again starting off bad, as in 73798, with DARM > 3 kHz worse than NO_SQZ. Touched the OPO temperature, attached; it didn't help.
In this lock and the last, I turned off the ITMY mode 8 gain as Ryan did in 73812, as it is slowly ringing up. Tagging SUS.
Took MICH FF measurements following instructions in lsc/h1/scripts/feedforward/README.md but running MICHFF_excitation_ETMYpum.xml. Data saved in lsc/h1/scripts/feedforward/ as .txt files. Last tuned MICH in 73420.
I saved a new filter in FM6 as 27-10-23-b (red trace) but it made the MICH coupling worse between 20 and 80Hz so we left the original (pink trace). We could try to re-fit the data to load in Wednesday's commissioning period.
I re-fit this data and think i have a better filter saved (not loaded) in FM3 as "27-10-23-a". We could try this during a commissioning period this week if we wanted to try to further improve MICH.
Tried this FM3 FF 2023/11/14 16:04:30 UTC to 16:06:30 UTC. It did not cause a lockloss. I did not run the MICH comparison plot, but DARM looked slightly worse. Plot attached.
From 16:07:05, I tried FM6 which is the currently installed MICH FF (FM5) without the 17.7Hz feature 74139.
I tried to test the new MICH FF FM3 that Camilla made. First I measured the current MICH FF FM5, as shown in the attached figure. The pink and black curves are the current MICH FF FM5 on 20231027 and 20231103, respectively. The current MICH FF got worse between 30 and 80 Hz over the week. The MICH FF on 20231103 was measured 6.5 hours into lock. Then I ramped the MICH FF gain to 0, turned off FM5, and turned on FM3. After I ramped the MICH FF gain to 1, a lockloss happened immediately.
Sorry that this caused the 1383077917 lockloss.
Unsure why this FM3 would be unstable. The lockloss occurred 10 seconds after the MICH FF had finished ramping on (13 s minus the 3 s ramp time). The FM3 MICH FF looks to be outputting a factor of ~2 higher than the current FM5 filter. I don't see any obvious instabilities in the 10 seconds before the lockloss.
LSC and ASC plots attached. I wonder if the lockloss was just badly timed. We could attempt to repeat this before our Tuesday Maintenance period.
Sheila, Naoki, Camilla
We took FDS, Anti-SQZ and Mean SQZ data at different NLGs: 14, 17, 43, 82 and 123. We think we see evidence of frequency noise. We see less squeezing at high NLGs: around 4.5 dB of SQZ at NLG < 43.1, but at NLG 81.9 and 122.8, squeezing at 1 kHz reduced to 3.6 and 3.1 dB. Plot attached.
Each time we optimized the SQZ angle around 1 kHz. Because of this, the increased low frequency noise at high NLGs could be due to incorrect rotation rather than phase noise.
| Type | Time (UTC) | Demod phase | DTT ref# | NLG | SQZ dB at 900Hz |
|---|---|---|---|---|---|
| FDS | 21:23:40-21:26:00 | 152.25 | 3 | 13.9 | -4.3 |
| FDAS | 21:27:30-21:32:30 | 228.10 | 6 | 13.9 | 15.2 |
| Mean | 21:33:47-21:38:00 | - | 7 | 13.9 | 12.7 |
| No SQZ | 21:40:00-21:44:50 | - | 0 | - | |
| FDS (not optimized) | 21:51:20-21:54:30 | 164.57 | Deleted | 43.1 | -4 |
| FDAS (left ASC off) | 21:56:00-21:59:45 | 226.2 | 5 | 43.1 | 19.1 |
| FDS (retake) | 22:03:00-22:06:13 | 167.42 | 4 | 43.1 | -4.3 |
| Mean | 22:06:45-22:09:52 | - | 8 | 43.1 | 16.9 |
| FDS | 22:37:50-22:41:00 | 172.16 | 9 | 122.8 | -3.1 |
| FDAS | 22:42:43-22:45:45 | 217.67 | 10 | 122.8 | 24.7 |
| Mean | 22:46:35-22:49:55 | - | 11 | 122.8 | |
| FDS | 22:57:30-22:59:40 | 169.31 | 12 | 81.9 | -3.6 |
| FDAS | 23:01:20-23:02:55 | 219.56 | 13 | 81.9 | 22.7 |
| Mean sqz | 23:03:26-23:05:45 | - | 14 | 81.9 | 20 |
| FDS (left IFO here) | 23:19:38 | 154.14 | 15 | 17.1 | -4.5 |
Data saved in /camilla.compton/Documents/sqz/templates/dtt/20231025_GDS_CALIB_STRAIN.xml
| Time (UTC) | Unamplified peak max (OPO_IR_PD_LF, Scan Seed) | Unamplified dark offset | OPO trans (uW) | OPO temp (degC) | Amplified max (H1:OPO_IR_PD_LF, Scan OPO PZT) | NLG (Amplified / (Unamplified Peak Max - Unamplified Dark Offset)) |
|---|---|---|---|---|---|---|
| 21:15 | 786e-6 | -16e-6 | 80.54 | 31.446 | 0.0112 | 13.9 |
| 21:44 | | | 100.52 | 31.414 | 0.0346 | 43.1 |
| 22:17 | | | 120.56 | 31.402 | 0.160 | 199.5 (SQZ loops went unstable) |
| 22:30 | | | 110.5 | 31.407 | 0.06617 | 82 (didn't use) |
| 22:36 | | | 115.46 | 31.405 | 0.09853 | 122.8 |
| 22:52 | | | 110.5 | 31.407 | 0.0657 | 81.9 |
| 22:09 | 769e-6 | -15e-6 | 80.5 | 31.424 | 0.0137 | 17.1 |
Vicky and Naoki did an NLG sweep on the Homodyne in 73562.
Edits to the NLG table in the above alog: the last measurement was at 23:09 UTC, not 22:09. For measuring unamplified we scanned the OPO PZT, and for amplified we scanned the SEED PZT.
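As a cross-check of the NLG column (my own arithmetic, just following the formula in the table header):

```python
def nlg(amplified_max, unamp_peak_max, unamp_dark_offset):
    """NLG = amplified max / (unamplified peak max - unamplified dark offset)."""
    return amplified_max / (unamp_peak_max - unamp_dark_offset)

# First row of the table above (21:15):
print(f"{nlg(0.0112, 786e-6, -16e-6):.1f}")  # ~14.0, matching the quoted 13.9
```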
Setup for each SQZ Measurement:
Attached are fits of squeezing loss, phase noise, and technical noise to this NLG sweep on DARM: we can infer 25-30% total SQZ losses, ~25 mrad rms phase noise, with technical noise about 12 dB below shot noise.
Losses are tracked in the SQZ wiki and gsheet. Of the 30% total inferred loss, we expect 20%: that is 7.5% injection loss, and 13.7% readout loss.
Remaining ~10% mystery loss is compatible with mode-mismatch: in sqz-omc single bounce mode scans, LHO:73696, we estimate 8-15% mismatch, and we observe the frequency-dependence of the mismatch as we vary squeezer beam profile using PSAMS: 73400, 73621.
In the fits, [loss, phase noise, technical noise] are fit to the measured SQZ and anti-SQZ, given the measured NLG, using equations from the Aoki paper. "Loss from mean sqz" is the loss calculated from the measured NLG and measured mean-SQZ dB; it is not a fit, but depends on the calibration from NLG to generated SQZ dB.
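For reference, a minimal sketch of the kind of model being fit (the standard single-mode OPO squeezing equations as I understand them from the Aoki paper; the function and parameter choices are my own illustration, not the actual fitting code):

```python
import math

def sqz_antisqz_db(nlg, eta, theta_rms, n_tech=0.0):
    """Measured squeezing / anti-squeezing relative to shot noise, in dB.

    nlg       : nonlinear gain (amplified / unamplified)
    eta       : total detection efficiency (1 - total loss)
    theta_rms : rms phase noise in radians
    n_tech    : technical noise relative to shot noise (linear power units)
    Uses the standard OPO relation NLG = 1/(1-x)^2 for pump parameter x.
    """
    x = 1.0 - 1.0 / math.sqrt(nlg)
    s_minus = 1.0 - eta * 4.0 * x / (1.0 + x) ** 2   # pure squeezed quadrature
    s_plus = 1.0 + eta * 4.0 * x / (1.0 - x) ** 2    # pure anti-squeezed quadrature
    c2, s2 = math.cos(theta_rms) ** 2, math.sin(theta_rms) ** 2
    sqz = s_minus * c2 + s_plus * s2 + n_tech        # phase noise mixes the quadratures
    asqz = s_plus * c2 + s_minus * s2 + n_tech
    return 10 * math.log10(sqz), 10 * math.log10(asqz)

# Numbers quoted above: ~30% loss, 25 mrad phase noise, technical noise
# -12 dB below shot noise, evaluated at NLG = 13.9.
print(sqz_antisqz_db(13.9, eta=0.70, theta_rms=0.025, n_tech=10 ** (-12 / 10)))
# roughly -4.0 dB SQZ and +14.7 dB anti-SQZ, comparable to the measured -4.3 / +15.2
```

This reproduces the measured low-NLG point reasonably well, which is at least consistent with the fitted loss and phase-noise numbers above.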
Some summary slides here show the progression of our loss hunting, which so far lines up with Sheila's projections from 73395.