H1 PSL
jason.oberling@LIGO.ORG - posted 14:38, Wednesday 06 November 2024 - last comment - 15:51, Wednesday 06 November 2024(81107)
PSL NPRO Crystal Temperature Tuning

V. Xu, J. Oberling

With the PSL NPRO still showing signs of mode hopping, after consulting with Sheila via TeamSpeak it was decided to stop locking attempts and go out to the PSL racks to tune the NPRO crystal temperature to see if we could find a mode hop free region.  I wasn't too familiar with what we were looking for here but Vicky is, so she kindly agreed to lend a hand.  We grabbed an oscilloscope from the EE shop, some Lemo cables and adapters, and out we went.

We wanted to scan the PMC through a complete FSR so we could watch the mode content while we changed the NPRO crystal temperature.  To do this we unplugged the DC cables for PMC Trans and Refl from the PSL Monitoring Fieldbox and plugged them into the oscilloscope.  We also plugged a cable into the 50:1 HV monitor so we could trigger off the PMC PZT ramp.  For the ramp we used the Alignment Ramp on the PMC MEDM screen, set to +/- 7V at 1 Hz.  Once we got the signals on the scope and successfully triggered, we clearly had a full FSR visible, so we kept these ramp settings.  We watched things for a bit before starting adjustments and noticed the laser frequency drifting slightly; Vicky estimated this at 1-3 GHz in 30 minutes.

We began at the crystal temperature where we left it yesterday (25.2 °C) and used the temperature control on the FSS MEDM screen to step the temperature down.  At ~24.85 °C we started to see clear mode hopping behavior in the scan, which got worse at 24.75 °C; see the first attachment.  We had originally set the crystal temperature to 24.7 °C, so clearly we started in a mode hopping region.  We continued moving the temperature down to see where the mode hop region ended; the scan didn't start looking good again until ~24.5 °C.  We kept moving the temperature down until we hit the end of the slider's range, which put the crystal temperature at 23.95 °C, and we were still mode hop free.  We then started moving the temperature up to map out the upper region (>25.2 °C).  We started seeing the bad mode hopping region right above 24.5 °C, which matches where we saw it on the way down.  However, it didn't clear up until the temperature was >25.1 °C, uncomfortably close to our starting value of 25.2 °C.  It looks like the spot we locked the RefCav at yesterday was right on the edge of a mode hopping region, and we had been slowly drifting in and out of it.  We continued to move the temperature up via the MEDM slider until it maxed out at a crystal temperature of 25.49 °C and saw no mode hopping behavior.  The second attachment shows some pictures from both of these good regions.

The slider was maxed but we wanted to continue mapping the upper region, especially as this area is closer to the operating temperature of the NPRO we just removed from the PSL (meaning the SQZ and ALS lasers had been happily locked in this area for all of O4 to date).  To do this we set the slider back to 0 (a crystal temperature of ~24.7 °C) and used the knob on the front panel of the NPRO power supply to adjust the crystal temperature.  We moved slowly while watching the power out of the amplifiers and the NPRO, in case the temperature adjustment caused beam changes that had a negative effect on amp output.  In this way we moved the crystal temperature up to 26.75 °C and saw no mode hopping behavior.  We did, however, start to see the power out of the amplifiers drop a little.  At a crystal temperature of 26.75 °C we had lost roughly 3 mW from the NPRO, ~0.5 W from Amp1, and ~1 W from Amp2.  Scaling the 3 mW NPRO loss by the roughly 70x gain from NPRO to Amp2 output (~2 W in, ~139 W out) accounts for only about 0.25 W at Amp2, so most of the loss is likely due to mode changes in the NPRO beam caused by the different operating temperature (if this were alignment we would have lost power a good bit faster; the mode matching changes I've observed are generally a good deal slower than alignment changes).  Because of this we stopped here and set the crystal temperature to ~26.27 °C.  We still had ~139.0 W output from Amp2, so all is well here.

We then plugged everything back in and relocked the PMC; it locked without issue.  We then adjusted the crystal temperature down via the MEDM slider to find a RefCav resonance.  We did have one time where the PMC unlocked when the PZT ran out of range (the PMC PZT moves to keep the PMC locked while the laser frequency changes).  We couldn't see a 00 coming through on the PMC ramp, so we moved the crystal temperature back up until a 00 was clearly visible flashing through; at this point the PMC locked without issue.  Continuing to move the temperature down we found a RefCav resonance flash through at a crystal temperature of 26.26 °C, so we locked the RefCav.  It took a couple of passes to grab the lock (had to disable the FSS Guardian to keep it from yanking the gains around), but it did lock.  The final crystal temperature was 26.2 °C.  To finish we locked the ISS, grabbed our equipment, and left the LVEA.

We left things here for an hour and saw no signs of mode hopping again, so Daniel and Vicky started tuning the ALS and SQZ lasers to match the new NPRO frequency.  Time will tell whether or not we are in a better place in regard to NPRO mode hopping, but results so far are promising.

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 15:08, Wednesday 06 November 2024 (81108)SQZ

Daniel, Vicky - We adjusted the SQZ + ALS Y/X laser crystal temps again to match the new PSL laser frequency:

  • SQZ: 27.05 C --> 26.08 C 
    • was +2.934 GHz.
    • Had to revert yesterday's laser current change 81079 to avoid mode-hopping at this new temp.
      • So today current changed 1.885 A --> 1.936 A
    • After, SQZ PMC transmission is back to normal, ~690 (yesterday it was low, ~650)
  • ALS Y: 28.8(?) C  --> 27.92 C
    • was +2.8 GHz.
    • Issues:
      • Reduced laser current 1.635 A --> 1.60A to avoid ALS Y mode-hopping at this new temperature.
      • The laser crystal temperature knob is very bad, temp adjustment is difficult. Readback temp jumps around a lot as the knob is turned even very slightly. Makes tuning out of a mode hop region very difficult.
  • ALS X: 25.45 C --> 26.40 C
    • was -3.162 GHz. Adjustment was easy.

----------------------------------------------------------------------------------------------------------------------------------

Record of aux laser crystal temp changes following PSL swap:

  • 80922: 1st adjustment for O3 spare PSL freq    (last week, PSL was likely mode-hopping last weekend)
    • SQZ:    24.83 C --> 26.30 C
    • ALS Y: 28.59 C --> 30.05 C
    • ALS X: 24.8 C --> 26.62 C
  • 81074: 2nd adjustment for O3 spare PSL freq   (yesterday to avoid weekend mode-hopping, but PSL was likely intermittently mode-hopping last night)
    • SQZ:      26.31 C --> 27.02 C  
    • ALS Y:   30.05 C --> 28.85 C  (after un-clamping current)
    • ALS X:   26.62 C --> 25.44 C
  • Here, 3rd adjustment for O3 spare PSL freq   (today, to tune the PSL laser crystal temp further into a mode-hop free region, bringing the PSL laser freq back toward its original O4 value).
    • SQZ:   27.05 C --> 26.08 C    (1.936 A)
    • ALS Y: 28.8_ C  --> 27.92 C   (1.60 A)
    • ALS X: 25.45 C --> 26.40 C
camilla.compton@LIGO.ORG - 15:51, Wednesday 06 November 2024 (81109)SQZ

Vicky, Camilla. Repeated what Vicky did yesterday 81079 to get the OPO temperature back (these are instructions for when the temperature starts very far away):

  • Toggled CLF to SEED in.
  • Locked PMC and SHG, took SQZ_OPO_LR to LOCKED (locks on GREEN)
  • Adjusted temperature to maximize the seed signal on OPO_IR_PD_LF_OUT; got a bit confused here, either went past the best temp or locked on a 2nd-order mode (see the sketch after this list).  
  • Toggled back to CLF and re-optimized; we got OPO_IR_PD_LF_OUT to ~200e-6 and CLF RF6 increased to -13. 
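
For future reference, a rough sketch of what the seed-maximization step could look like if scripted with pyepics. The channel names below are placeholders that would need to be checked against the SQZ MEDM screens, and the scan range and settling time are guesses:

    # Rough sketch only: scan the OPO crystal temperature setpoint and keep the
    # value that maximizes the seed signal on the IR PD. Channel names are
    # placeholders, NOT verified against the SQZ MEDM screens.
    import time
    import numpy as np
    from epics import caget, caput

    TEMP_SET = "H1:SQZ-OPO_TEC_SETTEMP"      # assumed temperature setpoint channel
    IR_PD = "H1:SQZ-OPO_IR_PD_LF_OUTPUT"     # assumed readback for OPO_IR_PD_LF

    temps = np.arange(31.0, 31.6, 0.01)      # scan range around the old setpoint
    readings = []
    for t in temps:
        caput(TEMP_SET, t, wait=True)
        time.sleep(5)                        # let the TEC and the PD settle
        readings.append(caget(IR_PD))

    best = temps[int(np.argmax(readings))]
    caput(TEMP_SET, best, wait=True)
    print(f"seed maximized at {best:.2f} C, PD reading {max(readings):.3e}")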

We then took SQZ_MANAGER to DOWN and FDS_READY_IFO. Got there with no issues; we'll want to re-optimize the temperature right before we go to observing. 

Starting OPO temp: 31.25. Ending temp: 31.42, which is closer to the temperature we had straight after the NPRO swap.

H1 TCS
oli.patane@LIGO.ORG - posted 14:01, Wednesday 06 November 2024 - last comment - 08:38, Tuesday 12 November 2024(81106)
TCS Monthly Trends FAMIS

Closes FAMIS#28454, last checked 80607

CO2 trends looking good (ndscope1)

HWS trends looking good (ndscope2)

You can see in the trends when the ITMY laser was swapped about 15-16 days ago.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 08:39, Thursday 07 November 2024 (81122)

Trend shows that the ITMY HWS code stopped running. I restarted it.

camilla.compton@LIGO.ORG - 08:38, Tuesday 12 November 2024 (81220)CDS

Erik, Camilla. We've been seeing that the code running on h1hwsmsr1 (ITMY) kept stopping after ~1 hour with a "Fatal IO error 25" (Erik said this is related to a display); error message attached.

We checked that the memory on h1hwsmsr1 is fine. Erik traced this back to matplotlib trying to make a plot and failing because there was no display to draw it on. State3.py calls get_wf_new_center() from hws_gradtools.py, which calls get_extrema_from_gradients(), which makes a contour plot; it thinks there's a display available but then can't render the plot.  This error isn't happening on h1hwsmsr (ITMX).   I had ssh'ed into h1hwsmsr1 using the -Y -C options (allowing the stream image to show), but Erik found this was making the session think there was a display when there wasn't.

Fix: quit the tmux session, log in without options (ssh controls@h1hwsmsr1), and start the code again. The code has now been running fine for the last 18 hours.
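
As an aside, a longer-term guard against this class of failure would be to force a non-interactive matplotlib backend inside the analysis code itself, so it never depends on whatever DISPLAY the launching session happens to carry. A minimal sketch of the idea (illustrative only; the function below is not the actual hws_gradtools code):

    # Sketch: force off-screen rendering before pyplot is imported, so contour
    # extraction works the same with or without a (possibly stale) X display.
    import matplotlib
    matplotlib.use("Agg")           # never tries to open a window
    import matplotlib.pyplot as plt

    def contour_vertices(field, level=0.0):
        """Return contour vertices of a 2D array without needing a display."""
        fig, ax = plt.subplots()
        cs = ax.contour(field, levels=[level])
        segs = cs.allsegs[0]        # list of (N, 2) vertex arrays at this level
        plt.close(fig)              # free the figure; nothing is ever shown
        return segs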

Images attached to this comment
H1 PSL
oli.patane@LIGO.ORG - posted 13:04, Wednesday 06 November 2024 (81105)
PSL Status Report Weekly FAMIS

Closes FAMIS#26316, last checked 80936


Laser Status:
    NPRO output power is 1.9W (nominal ~2W)
    AMP1 output power is 68.62W (nominal ~70W)
    AMP2 output power is 138.7W (nominal 135-140W)
    NPRO watchdog is GREEN
    AMP1 watchdog is GREEN
    AMP2 watchdog is GREEN
    PDWD watchdog is GREEN

PMC:
    It has been locked 0 days, 0 hr 25 minutes
    Reflected power = 17.47W
    Transmitted power = 108.7W
    PowerSum = 126.2W

FSS:
    It has been locked for 0 days 0 hr and 25 min
    TPD[V] = 0.8925V

ISS:
    The diffracted power is around 3.6%
    Last saturation event was 0 days 0 hours and 33 minutes ago


Possible Issues:
    ISS diffracted power is high

H1 General
corey.gray@LIGO.ORG - posted 11:26, Wednesday 06 November 2024 (81104)
Mid-ish Shift Status: H1 IDLE for PSL Rack Work

Currently, Jason and Vicky are out on the floor for rack measurements and possible new laser frequency change (which will require changes for the ALS & SQZ lasers).

LHO VE
david.barker@LIGO.ORG - posted 10:42, Wednesday 06 November 2024 (81101)
Wed CP1 Fill

Wed Nov 06 10:04:05 2024 INFO: Fill completed in 4min 2secs

Travis confirmed a good fill curbside.

Images attached to this report
H1 SEI
filiberto.clara@LIGO.ORG - posted 09:12, Wednesday 06 November 2024 - last comment - 08:36, Friday 08 November 2024(81097)
3T Guralp seismometers Huddle Test - LVEA

WP 12139

Entry for work done on 11/5/2024

Two 3T seismometers were installed in the LVEA Biergarten next to the PEM area. Signals are routed through the SUS-R3 PEM patch panel into the CER. Signals are connected to PEM AA chassis 4 and 5.

F. Clara, J. Warner

Comments related to this report
jim.warner@LIGO.ORG - 15:12, Wednesday 06 November 2024 (81103)

There are 2 of these plugged in; they are 3-axis seismometers, serial numbers T3611670 and T3611672. The first one is plugged into ports 4, 5 & 6 on the PEM patch panel, the second is plugged into ports 7, 8 & 9. In the CER, T3611670 is plugged into ports 21, 22 & 23 on PEM ADC5 and T3611672 is plugged into ports 27, 28 & 29 on PEM ADC4. In the DAQ, these channels are H1:PEM-CS_ADC_5_{20,21,22}_2K_OUT_DQ and H1:PEM-CS_ADC_4_{26,27,28}_2K_OUT_DQ. So far the seismometers look like they are giving pretty good data, similar to the STS and the old PEM Guralp in the biergarten. The seismometers are oriented so that the "north" marking on their carry handles is pointed down the X-arm, as best as I could by eyeballing it. 

I need to figure out the calibrations, but it looks like there is almost exactly a -15 dB difference between these new sensors and the old PEM Guralp, though maybe the signal chain isn't exactly the same.

Attached images compare the 3T's to the ITMY STS and the existing PEM Guralp in the biergarten. The first image compares ASDs for each seismometer. Shapes are pretty similar below 40 Hz, but above that they all have very different responses.  I don't know what the PEM Guralp is calibrated to, if anything; it looks ~10x lower than the STS (which is calibrated in nm/s). The 3T's are about 5x lower than the PEM sensor, so ~50x lower than the STS.
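
For anyone redoing this comparison, a minimal gwpy sketch of the level check. The STS channel name and the time span are my assumptions, and no calibrations are applied, so only the relative level (and its dB equivalent) is meaningful:

    # Sketch: compare the ASD level of one 3T axis against the ITMY STS.
    # The 3T channel is from the DAQ list above; the STS channel name and the
    # times are assumptions. Raw counts vs nm/s, so only the ratio matters.
    import numpy as np
    from gwpy.timeseries import TimeSeriesDict

    chans = ["H1:PEM-CS_ADC_5_20_2K_OUT_DQ",   # 3T X axis (counts)
             "H1:ISI-GND_STS_ITMY_X_DQ"]       # assumed ITMY STS X channel
    data = TimeSeriesDict.get(chans, "2024-11-05 20:00", "2024-11-05 21:00")

    asds = {c: ts.asd(fftlength=64, overlap=32) for c, ts in data.items()}
    ratio = asds[chans[0]].crop(0.1, 10) / asds[chans[1]].crop(0.1, 10)
    med = np.median(ratio.value)
    print(f"median 3T/STS level, 0.1-10 Hz: {med:.3g} ({20*np.log10(med):.1f} dB)")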

The second image shows TFs for the X, Y & Z dofs between the 3T's and the STS. These are just passive TFs between the STS and the 3T's to see if they have a similar response to ground motion. They are generally pretty flat between 0.1 and 10 Hz. The X & Y dofs seem pretty consistent; the Z TFs are different starting around 10 Hz. I should go and check that the feet are locked and have similar extension.

The third image shows TFs between the 3T's and the existing PEM Guralp. These are pretty similar to the TFs with the STS: the horizontal dofs all look very similar, flat between 0.1 and 10 Hz, but the ADC4 sensor has a different vertical response. 

I'll look at noise floors next.

 

 

Images attached to this comment
jim.warner@LIGO.ORG - 17:06, Thursday 07 November 2024 (81134)

The noise for these seems almost comparable to T240s above 100 mHz; I'm less certain about the noise below 100 mHz, since these don't have thermal enclosures like the other ground sensors. Using mccs2 in MATLAB to remove all of the noise coherent with the STS and PEM Guralp, the residual noise is pretty close to the T240 spec noise in SEI_sensor_noise. The attached plots are the ASDs and residuals after finding a scale factor that matches the 3T ASDs to the calibrated ITMY STS ASDs. Solid lines are the 3T ASDs, dashed lines are the residuals after coherent subtraction. 
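
For reference, a single-reference Python sketch of the same idea. This is not the mccs2 call actually used (mccs2 handles multiple references properly); here the residual is just estimated from the measured coherence with one reference:

    # Sketch: estimate the incoherent residual of a (scaled) 3T time series x
    # given one reference r (e.g. the STS), using residual ~ ASD*sqrt(1 - coh).
    import numpy as np
    from scipy.signal import welch, coherence

    def residual_asd(x, r, fs=2048, seg=64):
        f, pxx = welch(x, fs=fs, nperseg=int(seg * fs))
        _, coh = coherence(x, r, fs=fs, nperseg=int(seg * fs))
        asd = np.sqrt(pxx)
        return f, asd, asd * np.sqrt(np.clip(1.0 - coh, 0.0, 1.0))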

Images attached to this comment
brian.lantz@LIGO.ORG - 08:36, Friday 08 November 2024 (81141)

For convenience I've attached the response of the T240 and the STS-2 from the manuals.

These instruments both have a steep fall-off above 50-60 Hz.
This is not compensated in the realtime filters; compensating for it would just add lots of noise at high frequency, which we'd then have to roll off again.

T240 user guide - pg 45
https://dcc.ligo.org/LIGO-E1500379
The T240 response is pretty flat up to 10 Hz, has a peak at ~ 50 Hz, then falls off rapidly.
 

STS-2 manual - pg 7
https://dcc.ligo.org/LIGO-E2300142
Likewise the STS-2 response is pretty flat up to 10 Hz, then there is ripple, and a steep falloff above 60 Hz

Images attached to this comment
H1 CDS (SEI)
filiberto.clara@LIGO.ORG - posted 09:00, Wednesday 06 November 2024 (81095)
NN Huddle Test Setup

WP 12189

Entry for work done on 11/5/2024

Cabling and electronics were installed for a huddle test of HS-1 seismometers. A BNC cable was pulled from the Biergarten to the SUS-R3 PEM patch panel. An L4C interface chassis (S1600256) was installed in the TCS-C1 rack, U16. Signal will be routed to a PEM AA chassis. The chassis is powered off. The sensor will be connected next Tuesday.

LHO General
corey.gray@LIGO.ORG - posted 07:42, Wednesday 06 November 2024 - last comment - 10:26, Wednesday 06 November 2024(81092)
Wed DAY Ops Transition

TITLE: 11/06 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 6mph Gusts, 4mph 3min avg
    Primary useism: 0.16 μm/s
    Secondary useism: 0.18 μm/s
QUICK SUMMARY:

H1 had a 6.5hr lock overnight (after waking up Ibrahim), and just went down about 30min ago due to a M5.9 EQ in the Solomon Islands (switching Observatory Mode to EARTHQUAKE).  ISC LOCK is currently running an automatic alignment. 

Microseism continues its trend downward.  Winds are negligible.

Comments related to this report
camilla.compton@LIGO.ORG - 08:36, Wednesday 06 November 2024 (81094)PSL

As noted by Jason and Ryan S yesterday 81073, the power out of the PSL is again not stable, fluctuating from 1.2 to 2.7 W. There's no known solution for this, so we will sit with ISC_LOCK in IDLE until it stabilizes.

The IMC is saying it's locked but the power out is wildly fluctuating from 2 W to 50 W, so we took it to OFFLINE. Plot.

Images attached to this comment
camilla.compton@LIGO.ORG - 09:17, Wednesday 06 November 2024 (81098)

At 17:10 UTC this NPRO behavior calmed down and Jason locked the ISS and FSS. I took LASER_PWR to POWER_2W, then IMC_LOCK to LOCKED, then started the main locking process.

camilla.compton@LIGO.ORG - 10:26, Wednesday 06 November 2024 (81100)PSL

Attached plot showing comparison to Jason/Oli's plot from 81099. You can see:

  • NPRO power before the PMC (top right LASER_AMP2_PWR) is stable.
  • Even after FSS and ISS is taken DOWN, the REFL and TRANS power from the OMC are not stable.
  • We are unsure if the IMC IM4 TRANS power is physical or if there's some strange calibration happening here making it read 50 W out of the IMC. This plot shows that ASC_A IMC_MC2_TRANS and Keita's new AS_AIR channel do see power fluctuations at this time.
Images attached to this comment
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 00:47, Wednesday 06 November 2024 (81091)
OPS OWL Shift Summary

IFO is in NLN and OBSERVING as of 08:44 UTC

IFO was in MOVE_SPOTS when I was called, which I assume was due to the timer going over 2 hrs. The stall at this state was due to ADS Y signals not converging. While investigating why this may be, they reached the threshold necessary for the next step. I babysat Guardian until it reached OBSERVING. Essentially, I did nothing.

H1 General (ISC, PSL)
oli.patane@LIGO.ORG - posted 22:45, Tuesday 05 November 2024 - last comment - 09:18, Wednesday 06 November 2024(81090)
Ops Eve Shift End

TITLE: 11/06 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ibrahim
SHIFT SUMMARY: Currently trying to relock and at ENGAGE_DRMI_ASC after finishing an initial alignment. It's been a rough shift, with issues relocking.

Trouble with locking:

Good news is that we haven't had the new AS AIR PD (81080), PEM-CS_ADC_5_19_2K_OUT_DQ, saturate during the last three locklosses! The highest number of counts we've seen has been around 16k. I have also put in a feature request for the integral of the channel's peak at lockloss to be calculated and put onto the lockloss pages, and assigned it to myself (Issue#223). (ISC)
LOG:

00:00UTC In DRMI and trying to relock after a 28 minute lock
00:44 NOMINAL_LOW_NOISE
00:50 Observing

00:58 Lockloss
    - 01:21 Lockloss from CARM_5_PICOMETERS, and FSS wouldn't let the IMC lock?
        - Started getting giant FSS oscillations while it was trying to lock/was locked(attachment2)
        - Went to DOWN, went to IDLE, tried toggling FSS autolock, nothing worked
        - Put detector in DOWN for 10mins and then tried again
        - Seemed fine??
    - DRMI locklosses during ENGAGE_DRMI_ASC x 3
        - BS saturations and beams on AS AIR visibly start shaking from end of PREP until DRMI unlocks
    - 02:12 Started an initial alignment
        - At MICH, started having issues with it not being able to get past MICH_BRIGHT_ALIGN (attachment4)
        - PSL table flashing all over the place
        - Took ifo to DOWN, waited a while, then tried again with manual IA and no issues
    - 02:37 Initial alignment done, relocking
03:43 NOMINAL_LOW_NOISE
03:45 Observing
    - 03:57 Out of Observing to adjust OPO temperature
04:08 Back into Observing

04:28 Lockloss
    - Took 15 minutes for IMC to relock
    - Started relocking with H1 Manager
    - ENGAGE_DRMI_ASC had the beam spot shaking and BS saturations again, sometimes leading to LL x 4
    - I started an IA
    - IA done, relocking

Images attached to this report
Comments related to this report
jason.oberling@LIGO.ORG - 09:18, Wednesday 06 November 2024 (81099)

Oli's problems last night were due to the issues with apparent NPRO mode hopping noted here.  I've attached 2 trends, one showing the 20 minute range with t0 at 01:36 UTC as shown in Oli's first 3 plots, the other over the 5 hour range from when Oli first noted the issue until 06:06 UTC.  As can be seen, the ISS starts oscillating when PMC Refl begins increasing, which in turn causes PMC Trans to decrease.  The ISS is doing what it's supposed to, as it interprets the drop in PMC Trans as a call for less diffraction.  Once the ISS runs out of power available in the bank (because PMC Refl has increased so much that the ISS has drained the bank trying to follow), it unlocks and relocks; upon relock it still sees PMC Trans being low, so it drives the AOM diffraction down until it again runs out of range and unlocks.  The cycle repeats until the mode competition stops and PMC Refl returns to its normal ~17.5 W.  This happened 3 times in 5 hours last night, and was happening this morning when I arrived onsite.

Images attached to this comment
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 21:07, Tuesday 05 November 2024 (81089)
Lockloss

Lockloss @ 11/06 04:28 UTC after a 45 minute lock. Looks to be related to the ISS AOM driver again - at least I can't find anything that saw it before that (ndscope, bottom left)

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 20:17, Tuesday 05 November 2024 (81088)
Ops EVE Midshift Status

Currently Observing at 159Mpc and we have been locked for 30 minutes.

Relocking from the 00:58UTC lockloss took almost three hours due to issues with the PSL/IMC refusing to lock for many minutes at a time or the FSS locking but glitching/oscillating a ton, as well as issues with MICH not being well aligned. Our range was also very low so once we were relocked, I popped out of Observing to adjust the OPO temperature, which is something that Vicky told me I'd probably have to do. Just got done doing that and the range is looking MUCH better - we've gone from ~136 to almost 160Mpc since adjusting the temp, and DARM is looking much better in the higher frequencies.

H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 16:59, Tuesday 05 November 2024 - last comment - 09:21, Monday 11 November 2024(81084)
Lockloss

Lockloss @ 11/06 00:58UTC

Comments related to this report
oli.patane@LIGO.ORG - 17:09, Tuesday 05 November 2024 (81085)

This lockloss seems to have the AOM driver monitor glitching right before the lockloss (ndscope), similar to what I had noticed in the two other locklosses from this past weekend (81037).

Images attached to this comment
oli.patane@LIGO.ORG - 19:51, Tuesday 05 November 2024 (81087)

03:45UTC Observing

camilla.compton@LIGO.ORG - 08:44, Monday 11 November 2024 (81189)Lockloss, PSL

In 81073 (Tuesday 05 November) Jason/Ryan S gave the ISS more range to stop it running out of range and going unstable. But on Tuesday evening, Oli again saw this type of lockloss 81089. Not sure if we've seen it since then. 

The channel to check in the ~second before the lockloss is H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ. I added this to the lockloss trends shown by the 'lockloss' command line tool via /opt/rtcds/userapps/release/sys/h1/templates/locklossplots/psl_fss_imc.yaml.
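
For checking one of these by hand (outside the lockloss tool), a quick gwpy sketch; the GPS time below is only an example, use whatever the lockloss tool reports:

    # Sketch: pull the ISS AOM driver monitor around a lockloss and plot the
    # last few seconds, looking for the glitch that precedes these locklosses.
    from gwpy.timeseries import TimeSeries

    t_lockloss = 1414900000   # example GPS time; use the lockloss tool's value
    chan = "H1:PSL-ISS_AOM_DRIVER_MON_OUT_DQ"

    data = TimeSeries.get(chan, t_lockloss - 5, t_lockloss + 1)
    plot = data.plot(ylabel="ISS AOM driver mon")
    plot.gca().axvline(t_lockloss, color="r", linestyle="--")
    plot.savefig("iss_aom_mon_lockloss.png")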

I checked all the "IMC" tagged locklosses since Wednesday 6th and didn't see any more of these.

jason.oberling@LIGO.ORG - 09:21, Monday 11 November 2024 (81190)

This happened before we fixed the NPRO mode hopping problem, which we did on Wednesday, Nov 6th.  Not seeing any more of these locklosses since then lends credence to our suspicion that the ISS was not responsible for these locklosses; the NPRO mode hopping was.  (NPRO mode hops cause the power output from the PMC to drop; the ISS sees this drop and does its job by lowering the diffracted power %; once the diffracted power % hits 0 the ISS unlocks and/or goes unstable.)

H1 ISC (ISC)
keita.kawabe@LIGO.ORG - posted 16:39, Tuesday 05 November 2024 - last comment - 12:29, Thursday 07 November 2024(81080)
Installation of the AS power monitor in the AS_AIR camera enclosure

I've roughly copied the LLO configuration for the AS power monitor (that won't saturate after lock losses) and installed an additional photodiode in the AS_AIR camera enclosure. PD output goes to H1:PEM-CS_ADC_5_19_2K_OUT_DQ for now.

The GigE used to receive ~40 ppm of the power coming into HAM6. I replaced the steering mirror in front of the GigE with a 90:10 splitter; the camera now receives ~36 ppm and the beam going to the photodiode is ~4 ppm. But I installed ND1.0 in front of the PD, so the actual power on the PD is ~0.4 ppm.

See the attached cartoon (1st attachment) and the picture (2nd attachment).

Details:

  1. Replaced the HR mirror with a 90:10 splitter (BS1-1064-90-1025-45P), roughly kept the beam position on the camera, and installed a Thorlabs PDA520 (Si detector, ~0.3 A/W) with an OD1.0 absorptive ND filter in the transmission. I set the gain of the PDA520 to 0 dB (transimpedance = 10 kOhm).
  2. Reflection of the GigE was hitting the mirror holder of 90:10, so I inserted a beam dump there. The beam dump is not blocking the forward-going beam at all (3rd attachment).
  3. Reflection of the ND filter hits the black side panel of the enclosure. I thought of using a magnetic base, but the enclosure material is non-magnetic. Angling the PD to steer the reflection into the beam dump for the GigE reflection is possible but takes time (the PD is in an inconvenient location to see the beam spot).
  4. Fil made a custom cable to route power and signal through the unused DB9 feedthrough on the enclosure.  Pin 1 = Signal, Pin 6 = Signal GND, Pin 4 = +12V, Pin 5 = -12V, Pin 9 = power GND. (However, the power GND and signal GND are connected inside the PD.) All pins are isolated from the chamber/enclosure as the 1/2" metal post is isolated from the PD via a very short (~1/4"?) plastic adapter.
  5. Calibration of this channel (using ASC-AS_C_NSUM) seems to be about 48 mW/count (a rough cross-calibration sketch is below).
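
Below is a rough sketch of how the cross-calibration in item 5 could be redone after an ND change. It assumes ASC-AS_C_NSUM can be converted to watts with a known factor (left as a placeholder), the _OUT_DQ channel suffixes may need checking, and the time span is illustrative:

    # Rough sketch: regress AS_C power against the new PEM PD channel over a
    # stretch where the AS port power changes slowly. AS_C_W_PER_CT is a
    # placeholder for the real ASC-AS_C_NSUM counts-to-watts factor.
    import numpy as np
    from gwpy.timeseries import TimeSeriesDict

    AS_C_W_PER_CT = 1.0   # placeholder calibration for ASC-AS_C_NSUM
    span = ("2024-11-06 00:00", "2024-11-06 00:10")   # illustrative times

    data = TimeSeriesDict.get(
        ["H1:ASC-AS_C_NSUM_OUT_DQ", "H1:PEM-CS_ADC_5_19_2K_OUT_DQ"], *span)
    as_c = data["H1:ASC-AS_C_NSUM_OUT_DQ"].resample(256) * AS_C_W_PER_CT
    pem = data["H1:PEM-CS_ADC_5_19_2K_OUT_DQ"].resample(256)

    slope, offset = np.polyfit(pem.value, as_c.value, 1)
    print(f"~{slope * 1e3:.1f} mW per count (offset {offset:.3f} W)")
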
Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 17:11, Tuesday 05 November 2024 (81086)

This is the first look at a lock loss from 60W. At least the channel didn't saturate, but we might need more headroom (it should rail at 32k counts).

The peak power in this example is something like 670W. (I cannot figure out for now which AA filter and maybe decimation filter are in place for this channel; these things might be impacting the shape of the peak.)

Operators, please check if this channel rails after locklosses. If it does I have to change the ND filter.

Also, it would be nice if the lock loss tool automatically triggers a script to integrate the lock loss peak (which is yet to be written).
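
A possible starting point for that integration script (sketch only; it uses the ~48 mW/count calibration from this entry, which changes whenever the ND filters change, and an example GPS time):

    # Sketch of the yet-to-be-written integrator: convert the PD channel to
    # watts with the calibration above, subtract the pre-lockloss baseline, and
    # integrate the pulse to estimate the energy that came out at the lockloss.
    import numpy as np
    from gwpy.timeseries import TimeSeries

    W_PER_COUNT = 0.048                    # ~48 mW/count, from this entry
    CHAN = "H1:PEM-CS_ADC_5_19_2K_OUT_DQ"

    def lockloss_pulse_energy(t_lockloss, before=1.0, after=1.0):
        ts = TimeSeries.get(CHAN, t_lockloss - before, t_lockloss + after)
        power = ts.value * W_PER_COUNT                       # counts -> W
        baseline = np.median(power[: int(0.5 * ts.sample_rate.value)])
        return np.trapz(power - baseline, dx=ts.dt.value)    # joules

    # print(lockloss_pulse_energy(1414900000))   # example GPS time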

Images attached to this comment
ryan.crouch@LIGO.ORG - 09:20, Wednesday 06 November 2024 (81096)ISC, OpsInfo

Tagging Opsinfo

Also checking out the channel during the last 3 high power locklosses this morning (NLN, OMC_WHITENING, and MOVE_SPOTS). For the NLN lockloss, it peaked at ~16.3k cts 80ms after the IMC lost lock. Dropping from OMC_WHITENING only saw ~11.5k cts 100ms after ASC lost it. Dropping from MOVE_SPOTS saw a much higher reading (at the railing value?) of ~33k cts also ~100 ms after ASC and IMC lost lock.

Images attached to this comment
ryan.crouch@LIGO.ORG - 11:05, Wednesday 06 November 2024 (81102)

Camilla taking us down at 10W earlier this morning did not rail the new channel; it saw about ~21k cts.

Images attached to this comment
keita.kawabe@LIGO.ORG - 17:34, Wednesday 06 November 2024 (81111)

As for the filtering associated with the 2k DAQ, PEM seems to have a standard ISC AA filter, but the most impactful filter is an 8x decimation filter (16k -> 2k). Erik told me that the same 8x filter is implemented in src/include/drv/daqLib.c at line 187 (bi-quad form) and line 280 (not bi-quad), and one is mathematically transformed into the other and vice versa.

In the attached plot, it takes ~1.3 ms for the step response of the decimation filter to reach its first unity point, which is not really great but OK for what we're observing, as the lock loss peaks seem to be ~10 ms FWHM. For now I'd say that it's not unreasonable to use this channel as is.
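
For anyone wanting to repeat this estimate without digging the coefficients out of daqLib, a sketch along these lines works; the filter below is a generic 8x-decimation stand-in, not the actual daqLib filter, so the numbers will differ somewhat:

    # Sketch: step response of a generic 16384 Hz -> 2048 Hz decimation-style
    # low-pass, to see how much a ~10 ms FWHM peak would be smeared. This is
    # NOT the real daqLib filter; substitute its SOS coefficients to reproduce
    # the attached plot exactly.
    import numpy as np
    from scipy import signal

    fs = 16384
    sos = signal.cheby1(8, 0.05, 0.9 * fs / 16, fs=fs, output="sos")

    t = np.arange(0, 0.01, 1 / fs)          # 10 ms of unit step input
    resp = signal.sosfilt(sos, np.ones_like(t))

    crossings = np.flatnonzero(resp >= 1.0)
    if crossings.size:
        print(f"first unity crossing at ~{t[crossings[0]] * 1e3:.2f} ms")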

Images attached to this comment
keita.kawabe@LIGO.ORG - 12:29, Thursday 07 November 2024 (81112)

I added ND0.6, which will buy us about a factor of 4.

(I'd have used ND2.0 instead of ND1.0 plus ND0.6, but it turns out that Thorlabs ND2.0 is more transparent at 1um relative to 532nm than ND1.0 and ND0.6 are. Looking at their data, ND2.0 seems to transmit ~4 or 5%. ND 1.0 and ND 0.6 are closer to nominal optical density at 1um.)

New calibration for H1:PEM-CS_ADC_5_19_2K_OUT_DQ using ASC-AS_C_NSUM_OUT (after the ND was increased to 1.0+0.6) is ~0.708/4.00 ≈ 0.177 W/count.

Images attached to this comment