H1 AOS (ISC)
marc.pirello@LIGO.ORG - posted 13:19, Tuesday 10 September 2024 (80020)
SUSEX LIGO DAC 32 (LD32) testing at EX

Per WP 12079, we started measurements to directly compare the 20-bit DAC driving the ESD and L3 stage against our LD32.

Comparing ESD Driving Signals
Test 1 - LD32 bank 1 (ch 0-7) attached to channels 0-7 on the PEM ADC; 20-bit DAC ESD signal attached to channels 8-15 on the PEM ADC

Comparing L3 Driving Signals
Test 2 - 20-bit DAC L3 signal attached to channels 0-7 on the PEM ADC; LD32 bank 2 (ch 8-15) attached to channels 8-15 on the PEM ADC

After analysis of the data, we measured a delay of about 14 us in the LD32, which is accounted for by the anti-imaging filter on the FPGA.  We also determined that a gain of ~275.65 is needed to achieve the same calibration between the two DACs.
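For reference, a minimal sketch of how such a delay and gain could be extracted from the two digitized records, assuming the channel data have already been read into numpy arrays at a common sample rate (the rate, band, and function name here are illustrative, not the actual analysis code):

    import numpy as np
    from scipy import signal

    fs = 65536.0   # assumed PEM ADC model rate [Hz]; 14 us is under one sample here

    def relative_gain_and_delay(ref, meas, fs, nperseg=2**14, band=(10.0, 1000.0)):
        """Gain of `meas` relative to `ref`, and the delay of `meas` in seconds."""
        f, Pxy = signal.csd(ref, meas, fs=fs, nperseg=nperseg)   # cross spectrum
        _, Pxx = signal.welch(ref, fs=fs, nperseg=nperseg)       # reference PSD
        tf = Pxy / Pxx                   # transfer function estimate, ref -> meas
        sel = (f > band[0]) & (f < band[1])                      # fit a clean band
        gain = np.median(np.abs(tf[sel]))
        # A pure delay tau appears as phase = -2*pi*f*tau, so fit the phase slope.
        slope = np.polyfit(2 * np.pi * f[sel], np.unwrap(np.angle(tf[sel])), 1)[0]
        return gain, -slope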

Driving the ESD and L3 with the LD32 will be postponed until next week.

Daniel Sigg, Dave Barker, Marc Pirello

H1 CDS (ISC, PEM)
filiberto.clara@LIGO.ORG - posted 12:23, Tuesday 10 September 2024 (80019)
ADC card for PEM installed in h1iscey

WP 12067
CDS Front-end IO Chassis As-Built Drawings D1301004

A new ADC card was installed in h1iscey as part of the PEM upgrades. With h1iscey down, we also did some troubleshooting of the front-end (FE) computer not booting with the second Adnaco card connected to the FE (alog 62840).

Steps in debugging:

  1. The second Adnaco card was replaced in the IO chassis
  2. All ADC and DAC cards in Adnaco 3 were moved to Adnaco 2
  3. New PEM ADC installed in Adnaco 3, slot 1
  4. Fibers for Adnaco 2 reconnected at FE computer (slot 7)
  5. Computer would not reboot
  6. Fibers from Adnaco 4 (computer slot 4) disconnected and Adnaco 2 connected
  7. Computer booted
  8. Adnaco adapter card (computer slot 7) replaced
  9. Fibers for Adnaco 2 reconnected (computer slot 7)
  10. Computer would not boot

There are issues with slot 7 on the FE computer. Fibers were rearranged at the computer as follows:

  1. Computer slot 1 - Empty
  2. Computer slot 2 - Adnaco 1
  3. Computer slot 3 - Adnaco 2
  4. Computer slot 4 - Adnaco 3
  5. Computer slot 5 - Empty
  6. Computer slot 6 - RFM
  7. Computer slot 7 - Empty (BAD)

Adnaco 4 fibers left disconnected. Computer will be replaced.

D. Barker, F. Clara, E. Von Reis

H1 SQZ
camilla.compton@LIGO.ORG - posted 12:14, Tuesday 10 September 2024 - last comment - 19:33, Wednesday 11 September 2024(80010)
PSAMS adjustments for SQZ-OMC mode matching

Vicky, Camilla

Repeated the 66946 PSAMS changes with SQZ-OMC scans, using a new method of dithering the PZT around the TEM02 mode and minimizing it. With this we improved the mode mismatch from 4% to 3%. It will be interesting to see whether these settings are still better in full lock. Plots of the OMC scan are attached, along with the same plot zoomed in on the peaks.

Took OMC scans using template /sqz/h1/Templates/dtt/OMC_SCANS/Sept10_2024_PSAMS_OMC_scan_coldOM2.xml. Unlocked the OMC and set H1:OMC-PZT2_OFFSET to -50 (nominal is -17) before starting each scan.

Started with strategy 1: lock the OMC and maximize TEM00 with PSAMS (alignment controlled with loops).
Changed to the more sensitive strategy 2: Vicky put a dither on the OMC PZT around the TEM02 peak (awggui settings attached), then minimized the TEM02 peak with PSAMS (alignment controlled with loops).
 
ZM4/ZM5 PSAMS   TEM00    TEM02     Mismatch (% of TEM02)   Ref on plot   Notes
5.5V/-0.8V      0.6362   0.02684   4.048%                  0 (pink)      Starting point
4.0V/0.34V      0.6602   0.02332   3.412%                  1 (blue)      Maximized TEM00
2.1V/0.2V       0.6611   0.02097   3.074%                  2 (green)     Minimized TEM02; this is 0V on the ZM4 PZT
3.0V/0.85V      0.6609   0.02209   3.234%                  3 (orange)    Minimized TEM02
4.0V/0.34V      N/A      N/A       N/A                     -             Minimized TEM02; didn't scan, just checked that minimizing TEM02 gave similar results to the other method of maximizing TEM00
8.1V/-0.4V      0.6422   0.03432   5.073%                  4 (cyan)      Minimized TEM02; ZM4 setting chosen to be similar to what was good in full lock with cold OM2 in 76986
2.1V/-0.1V      0.6591   0.02162   3.176%                  5 (red)       Minimized TEM02
2.1V/0.2V       0.6580   0.01954   2.884%                  6 (brown)     Back to best ref 2 PSAMS values; LEAVING HERE

ZM5 settings over 1V or under -1V are bad at most ZM4 strains (tested at 4.0V on ZM4). For each step we adjusted ZM4 first and then fine-tuned ZM5.

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 14:58, Tuesday 10 September 2024 (80024)ISC, SQZ

Note that OM2 is currently cold for this measurement (and has been since the vent, it seems).

Images attached to this comment
camilla.compton@LIGO.ORG - 11:55, Wednesday 11 September 2024 (80045)

With the original PSAMS settings (5.5V/-0.8V) we had:

  • OMC locked 1-2 min, starting from 1410019031.
    • TRANS: DCPD_SUM_OUT = 0.65
    • REFL: OMC-REFL_A_LF_OUT between (0.06, 0.18)
  • OMC unlocked 30 s, starting from 1410019336.
    • TRANS: DCPD_SUM_OUT = 0.009
    • REFL: OMC-REFL_A_LF_OUT between (0.86, 0.99)

At the better PSAMS settings (2.1V/0.2V), plot attached:

  • OMC locked 1-2 min, starting from 1410027768.
    • TRANS: DCPD_SUM_OUT = 0.67
    • REFL: OMC-REFL_A_LF_OUT between (0.036, 0.167)
  • OMC unlocked 1 min, starting from 1410028000.
    • TRANS: DCPD_SUM_OUT = 0.0087
    • REFL: OMC-REFL_A_LF_OUT between (0.87, 1.008)
  • DARK OMC 1 min, starting from 1410028321.
    • TRANS: DCPD_SUM_OUT = 0.0084
    • REFL: OMC-REFL_A_LF_OUT between (-0.02, -0.01)
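These averages could be reproduced with a short gwpy snippet along these lines (the _DQ channel names are assumptions):

    from gwpy.timeseries import TimeSeries

    def mean_over(channel, gps_start, duration):
        """Average a channel over [gps_start, gps_start + duration)."""
        return TimeSeries.get(channel, gps_start, gps_start + duration).value.mean()

    trans_locked  = mean_over('H1:OMC-DCPD_SUM_OUT_DQ', 1410027768, 60)   # ~0.67
    refl_locked   = mean_over('H1:OMC-REFL_A_LF_OUT_DQ', 1410027768, 60)
    refl_unlocked = mean_over('H1:OMC-REFL_A_LF_OUT_DQ', 1410028000, 60)  # ~0.9
    refl_dark     = mean_over('H1:OMC-REFL_A_LF_OUT_DQ', 1410028321, 60)
    # Rough coupling estimate: 1 - (refl_locked - refl_dark) / (refl_unlocked - refl_dark)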
Images attached to this comment
H1 IOO (OpsInfo)
ryan.short@LIGO.ORG - posted 12:14, Tuesday 10 September 2024 (80018)
Centered IMC WFS

I took the opportunity to center the IMC WFS this morning, since we've been getting the notification on DIAG_MAIN for some time. The process was as follows (a scripted sketch of the first two steps appears after the list):

  1. Offload IMC WFS by taking IMC_LOCK to 'MCWFS_OFFLOADED'
  2. Take IMC_LOCK to 'OFFLINE'
  3. Using the MCWFS centering scope and picomotors on the IMC overview screen, center the signal on each WFS (scope screenshot attached)
  4. Accept IMC PZT offset SDFs from the previous offloading (attached)
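A hypothetical sketch of steps 1-2 via the Guardian request channel (channel names assumed; in practice these were done from the MEDM/Guardian screens):

    import time
    from ezca import Ezca

    ezca = Ezca(ifo='H1')
    ezca['GRD-IMC_LOCK_REQUEST'] = 'MCWFS_OFFLOADED'          # step 1: offload IMC WFS
    while ezca['GRD-IMC_LOCK_STATE_S'] != 'MCWFS_OFFLOADED':  # assumed state readback
        time.sleep(1)
    ezca['GRD-IMC_LOCK_REQUEST'] = 'OFFLINE'                  # step 2: take IMC_LOCK offline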
Images attached to this report
H1 CAL
louis.dartez@LIGO.ORG - posted 11:28, Tuesday 10 September 2024 - last comment - 14:05, Tuesday 10 September 2024(80017)
ETMX L1, L2, L3 COILOUTF and ESDOUTF gain adjustments for new 32-bit DACs
In preparation for WP 12079, I prepared ESDOUTF (L3) and COILOUTF (L1, L2) filters to make up the calibration difference between the 18-bit and the new 32-bit DACs. This is similar to what was done in LHO:64274 to adjust from 18-bit DACs to 20-bit DACs. Similarly, I placed new '32bitDAC' gains in CALCS-ETMX. In CAL-CS, the new gains are intended to replace the existing gains of '4'.

I have not yet loaded the filter coefficients, but they have been saved in the filter files. 
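As a sanity check on the bookkeeping (a sketch, not the filter-file contents): the gains of '4' follow from the bit-depth change over the same output voltage range, while the LD32 gain does not follow the naive power-of-two ratio and was instead measured empirically (see the LD32 testing entry above):

    # 18-bit -> 20-bit DACs span the same +/-10 V, so the same voltage needs 4x the counts:
    gain_18_to_20 = 2**20 / 2**18    # = 4.0, the CAL-CS gains being replaced
    naive_20_to_32 = 2**32 / 2**20   # = 4096, NOT the right gain for the LD32
    measured_ld32_gain = 275.65      # empirical value from the LD32 tests above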
Comments related to this report
jenne.driggers@LIGO.ORG - 14:05, Tuesday 10 September 2024 (80022)

I loaded the filters earlier today, so they should be ready to go whenever we start using these new LIGO DACs in-loop.

H1 TCS
thomas.shaffer@LIGO.ORG - posted 11:08, Tuesday 10 September 2024 (80015)
TCS chiller line inspection FAMIS and sock filter replacement

FAMIS27776

For the most part the chiller lines look good; nothing new showed up. A few known items were redocumented with pictures (see attached images).
Both sock filters were replaced, and gunk was found in the water reservoirs.

I noticed the other month that the sock filters were starting to look green. I was hesitant to replace them while we were observing, so I waited for a maintenance day when I was available. Well, we should have replaced them sooner. The TCSY sock had gunk on both the inside and the outside of the sock (attachment 9); TCSX was similar but not as bad (attachment 10). The big issue is that there were chunks of gunk suspended in the reservoirs (TCSY res. & TCSX). I'm not sure if this gunk is making it around our sock filters* or if it's just working its way through them. The latter is possible, since I saw some gunk on the outside of the TCSY sock.

It looks like the last time we replaced these filters was Aug 2023 (alog 72253), so we should set up a FAMIS task to remind us to do this sooner. The manual recommends replacing them every 3-4 months or so. While I don't recall ever replacing them that frequently in the past, perhaps it's time to start.

*Years ago we decided that not pushing the sock's gasket fully into its seat, and instead angling it toward the flow of the water while allowing an air gap on the far side, gave us more accurate reservoir level readings while still sending almost all of the water through the sock.

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 08:37, Tuesday 10 September 2024 (80011)
Tue CP1 Fill

Tue Sep 10 08:12:19 2024 INFO: Fill completed in 12min 15secs

Gerardo confirmed a good fill curbside.

Images attached to this report
LHO General
ibrahim.abouelfettouh@LIGO.ORG - posted 07:36, Tuesday 10 September 2024 (80008)
OPS Day Shift Start

TITLE: 09/10 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Calibration
OUTGOING OPERATOR: TJ
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 3mph Gusts, 1mph 3min avg
    Primary useism: 0.01 μm/s
    Secondary useism: 0.08 μm/s
QUICK SUMMARY:

IFO is in NLN and CALIBRATION for automatic PEM injections and in-lock SUS charge measurements, since we're locked (and have been for nearly 15 hours!).

Planning to head into planned Tuesday maintenance once those are done running, which should be around 8 AM.

 

H1 CDS
jonathan.hanks@LIGO.ORG - posted 07:34, Tuesday 10 September 2024 - last comment - 08:46, Tuesday 10 September 2024(80007)
WP 12069 Testing the backup CDS/GC router
As per WP 12069 I will be switching to the backup CDS router.

After discussion with the operator we will do this at 8am localtime.  This will temporarily break the link between the CDS network and the rest of the world.  It will not interrupt the IFO or control room work that stays within the CDS network.

The procedure to do this switchover for the current setup is documented in LIGO-T2300212.
Comments related to this report
jonathan.hanks@LIGO.ORG - 08:09, Tuesday 10 September 2024 (80009)
This test is done.  I was a little nicer in my procedure than the DCC document.  I issued a poweroff command to the router instead of hitting the power button.

The router appears to be working.  I will leave it in this configuration.
david.barker@LIGO.ORG - 08:46, Tuesday 10 September 2024 (80012)

The temporary removal of offsite access exposed a bug in my locklossalert code: after it lost its connection to Twilio, it stopped updating even after the network was restored. An FRS ticket has been opened.

I restarted the locklossalert systemd service on cdslogin. I also restarted the alarms service to ensure it was able to send texts.

H1 CDS
erik.vonreis@LIGO.ORG - posted 07:19, Tuesday 10 September 2024 (80006)
Workstations updated

Workstations were updated and rebooted. This was an OS package update; Conda packages were not updated.

H1 General
ryan.crouch@LIGO.ORG - posted 22:00, Monday 09 September 2024 (80005)
OPS Monday EVE shift summary

TITLE: 09/10 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 149Mpc
INCOMING OPERATOR: TJ
SHIFT SUMMARY: We've been locked for just over 5 hours, the range has been just under 160, and the wind has calmed back down.

23:54 UTC Observing

 

Images attached to this report
H1 General
oli.patane@LIGO.ORG - posted 16:29, Monday 09 September 2024 (80000)
Ops Day Shift End

TITLE: 09/09 Day Shift: 1430-2330 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Lock Acquisition
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY: Currently relocking and at ENGAGE_ASC_FOR_FULL_IFO. One lockloss today (79991) but not much to report. Relocking is taking a while because of changes in the ISC_LOCK code and not because of IFO issues.
LOG:

14:30 UTC Observing and Locked for 4:50 hours
15:31 Out of Observing for Commissioning
18:31 Back into Observing
20:01 Out of Observing due to SQZer unlock
20:12 Lockloss
    21:45 Lockloss from LASER_NOISE_SUPPRESSION (probably user error/people on floor)
    22:41 Lockloss from ADS_TO_CAMERAS 

Start  System  Name          Location         Laser Haz  Task                                 End
15:05  FAC     Karen         VacPrep, OptLab  n          Tech clean                           15:20
15:26  FAC     Karen         MY               n          Dumping garbage                      17:10
15:29  FAC     AC company    VPW              n          New AC unit install                  19:40
16:01  FAC     Kim           MX               n          Tech clean                           18:07
17:12  FAC     Mitchell      EX               n          Grabbing dust monitor pump           18:39
17:51         Betsy         CER, OptLab      n          Grabbing stuff and looking for sand  17:58
19:07  FAC     Tyler         OptLab           n          Looking at sandy flow bench          19:13
20:50  SQZ     Vicky, Naoki  LVEA             YES        ISS alignment on SQZT0               21:50
H1 General
ryan.crouch@LIGO.ORG - posted 16:05, Monday 09 September 2024 - last comment - 16:57, Monday 09 September 2024(79999)
OPS Monday EVE shift start

TITLE: 09/09 Eve Shift: 2330-0500 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Lock Acquisition
OUTGOING OPERATOR: Oli
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 23mph Gusts, 16mph 3min avg
    Primary useism: 0.03 μm/s
    Secondary useism: 0.06 μm/s
QUICK SUMMARY:

Comments related to this report
ryan.crouch@LIGO.ORG - 16:57, Monday 09 September 2024 (80003)

Back to observing at 23:54 UTC

H1 ISC (SEI)
jim.warner@LIGO.ORG - posted 14:53, Monday 09 September 2024 - last comment - 08:50, Friday 13 September 2024(79998)
Tweaking IMC gain distribution to reduce IMC splitmon saturations during earthquakes

A while ago, Oli found that some earthquakes cause IMC SPLITMON saturations, possibly causing locklosses. I asked Daniel about this with respect to the tidal system, to see if we could improve the offloading some. After some digging, we found that some of the gains in the IMC get lowered to -32 dB at high power, greatly reducing the effective range of the IMC SPLITMON. He and Sheila decided that the best place to recover this gain was during the LASER_NOISE_SUPPRESSION state (575), so Sheila added some code to that state to redistribute some of the gain (lines 5778-5788):

 

            self.gain_increase_counter = 0
        if self.counter == 5 and self.timer['wait']:
            if self.gain_increase_counter < 7:  # increase the IMC fast gain by this many dB
                # redistribute gain in IMC servo so that we don't saturate splitmon in earthquakes, JW DS SED
                ezca['IMC-REFL_SERVO_IN1GAIN'] -= 1
                ezca['IMC-REFL_SERVO_IN2GAIN'] -= 1
                ezca['IMC-REFL_SERVO_FASTGAIN'] += 1
                time.sleep(0.1)
                self.gain_increase_counter += 1
            else:
                self.counter += 1

We tried running this, but an error in the code broke the lock. That's fixed now; the lines are commented out in ISC_LOCK, and we'll try again some other day.

Comments related to this report
jim.warner@LIGO.ORG - 16:46, Monday 09 September 2024 (80002)

This caused 2 locklosses, so it took a little digging to figure out what was happening. The idea is to increase H1:IMC-REFL_SERVO_FASTGAIN to compensate for reducing H1:IMC-REFL_SERVO_IN1GAIN and H1:IMC-REFL_SERVO_IN2GAIN, all analog gains used in the IMC/tidal controls. It turns out there is a decorator used in almost every state of IMC_LOCK that sets H1:IMC-REFL_SERVO_IN1GAIN to some value, so when ISC_LOCK changed all three of these gains, IMC_LOCK came in afterwards and reset FASTGAIN. This is shown in the attached trend: in the middle plot, the IN1 and IN2 gains step down as they are supposed to, but FASTGAIN traces out a sawtooth caused by two Guardians controlling this gain. The decorator is called IMC_power_adjust_func() in ISC_library.py and is invoked as @ISC_library.IMC_power_adjust in IMC_LOCK. The decorator only looks at the value of FASTGAIN; Daniel suggests it would be best for this decorator to look at all of the gains and act a little smarter (a rough sketch of that idea follows). I think RyanS will look into this, but it looks like redistributing gain in the IMC is not straightforward.
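A rough sketch of the smarter check Daniel suggests, written here as a plain function with hypothetical names (not the real ISC_library code): only trim FASTGAIN when the total path gain is off, so a deliberate redistribution that preserves the sum is left alone.

    def imc_power_adjust(ezca, nominal_path_gain_db):
        """Hold the IN1 -> FAST path gain (in dB) at its nominal value by trimming
        FASTGAIN only, leaving sum-preserving IN1/FAST redistributions untouched."""
        path_gain = (ezca['IMC-REFL_SERVO_IN1GAIN']
                     + ezca['IMC-REFL_SERVO_FASTGAIN'])
        if path_gain != nominal_path_gain_db:
            ezca['IMC-REFL_SERVO_FASTGAIN'] += nominal_path_gain_db - path_gain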

Images attached to this comment
brian.lantz@LIGO.ORG - 08:50, Friday 13 September 2024 (80074)

Tagging this with SPI. This would be good to compare against: the SPI should reduce HAM2-3 motion and thus reduce the IMC length changes coming directly from ground motion. If the IMC drive has to match the arm length changes, then it won't help (unless we do some feedforward of the IMC control to the ISIs?).

H1 ISC
elenna.capote@LIGO.ORG - posted 14:19, Monday 09 September 2024 - last comment - 09:53, Tuesday 10 September 2024(79989)
PRCL offset reduces CHARD Y noise, changes HAM1 coupling

[Elenna, Gabriele]

I did another test of the PRCL offset today, mirroring previous tests (76814). The motivation for this test is, first, the large coherence of DARM with LSC REFL RIN, indicating that we are locked with some offset in PRCL. Second, there is large coherence of PRCL and CHARD Y with DARM; previous tests have shown that some PRCL noise couples through CHARD Y, and that this coupling can be reduced by adding a digital offset to PRCL, which counteracts whatever locking offset is present in PRCL.

Once again, a PRCL offset is shown to reduce PRCL/REFL RIN coupling, and PRCL to CHARD Y coupling. The minimum occurs with an offset of about -58 (compare to April's best offset of -62). This coupling indicates an offset somewhere between 25 pm and 50 pm (quoting Gabriele 76810). The digital offset reduces REFL RIN/PRCL coupling by 40 dB. PRCL and LSC REFL RIN coupling plot

The offset reduces the noise in CHARD Y by reducing the PRCL coupling by a factor of 10. PRCL and CHARD Y coupling plot, and noise reduction in CHARD Y, CHARD Y noise

The PRCL to CHARD P coupling does not change: PRCL and CHARD P coupling plot (I was lazy and did not properly label this plot, but each trace is a different offset count, as in the other plots). However, the offset worsens the noise in CHARD P, because it effectively changes the amount of HAM1 noise coupling into CHARD P: CHARD P noise. We only checked CHARD P, but it likely changes the noise in INP1 P and PRC2 P as well. With this PRCL offset in place, I ran a stretch with the HAM1 FF off so we could get data to retune the feedforward (start: 1409940778, stop: 1409941096).

I think that with this PRCL offset and an updated feedforward, we will see some improvement in the low-frequency noise. The last time we did this test we did not check the effect on the feedforward, which is possibly why the sensitivity did not improve even though we saw the exact same reduction in CHARD Y noise as before.

Overall, we can convince ourselves that this result makes sense because the offset is changing the amount of 00 mode present at the REFL WFS.

We have made no changes yet, but I will fit new feedforward filters so that we can retest this offset plus the new HAM1 FF at the next commissioning time.
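A sketch of the kind of coherence check described above, using gwpy over the FF-off stretch (the channel names here are assumptions):

    from gwpy.timeseries import TimeSeriesDict

    chans = ['H1:LSC-PRCL_OUT_DQ', 'H1:ASC-CHARD_Y_OUT_DQ']
    data = TimeSeriesDict.get(chans, 1409940778, 1409941096)  # HAM1 FF-off stretch
    # Coherence between PRCL and CHARD Y; repeat per offset value to see the coupling change.
    coh = data[chans[0]].coherence(data[chans[1]], fftlength=8, overlap=4)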

Images attached to this report
Comments related to this report
elenna.capote@LIGO.ORG - 14:37, Monday 09 September 2024 (79995)

Overall, the PRCL coupling to DARM did not change much with this offset, which contradicts the idea that most of the PRCL coupling to DARM occurs through CHARD Y.

Images attached to this comment
elenna.capote@LIGO.ORG - 09:53, Tuesday 10 September 2024 (80014)

I have fit new HAM1 FF filters that can be used when the PRCL offset is engaged. I fit new filters for INP1 P, PRC2 P, and CHARD P. There is very little to subtract for the DC1 and DC2 loops, so I recommend those FFs be turned off when the PRCL offset is engaged. All the new filters are saved in FM10 of the filter banks and are labeled "PRCL_offset".

H1 ISC
sheila.dwyer@LIGO.ORG - posted 13:39, Monday 09 September 2024 - last comment - 18:38, Monday 09 September 2024(79985)
offset and OFI temperature checks this morning

TJ, Vicky, Naoki, Sheila

Since the vent we haven't fully checked some of the usual offsets and settings that we might need to adjust, so I started on some of them this morning.

I started with stepping the BS camera pitch offset, which I'd begun in 79663 but wasn't able to finish at the time because of a lockloss. We gained a little more than 1 kW of circulating power in the arms by adjusting the offset from 233 to 230 (see first attachment); this has been added and loaded in lscparams. The improvement in POP A + B may be due to an alignment shift, since we are probably clipping on both of those.

Went to no squeezing and ran an OMC alignment offset scan, copying Jennie's dtt template from 78616. This scan saturated the OMC pitch actuator, so I reduced the OMC ANG Y amplitude from 3 to 1 and restarted the scan, running 15:53:07-16:16:29 UTC. TJ and I noticed that the strange flashing behavior of the OMC camera was coming and going while this scan was happening.

Following up on 79719, I used the template sheila.dwyer/OMC/OMC_fringe_wrapping.xml and reduced the amplitude to 600, as Naoki did, to adjust the OFI temperature. We found that the fringe-wrapping shelf amplitude was reduced by increasing the temperature: we went up to 37.5C, and the amplitude of the shelf dropped by a factor of 3 (second screenshot attached). We didn't go hotter than this because we weren't sure how hot we would want to operate. This change in OFI temperature doesn't seem to have changed the optical gain, as shown in the 3rd attachment. (Template in sheila.dwyer/OMC/OMC_fringe_wrapping_after_OFI_repair.xml; ref 38 25C, ref 39 28.5C, ref 40 29.5C, ref 41 30.5C, ref 42 31.5C, ref 43 32.5C, ref 45 34.5C, ref 46 35.5C, ref 47 36.5C, ref 48 37.5C.)

Vicky, Naoki and I started some alignment scans for 4 squeezer DOFs (ZM5+6 P+Y) with different dither frequencies (similar to the OMC offsets) at 17:10-17:34:20 UTC. (We continued the OFI temperature adjustments while this was happening.)

Images attached to this report
Comments related to this report
victoriaa.xu@LIGO.ORG - 18:38, Monday 09 September 2024 (80004)

Analyzing the OMC alignment offset scan, adapting Gabriele's and Jennie W's code from lho78616.

The script lives in /opt/rtcds/userapps/release/omc/h1/scripts/OMC_alignment_dither_check_qpd_offsets.* and is saved as both a script and a jupyter notebook.

If running as a script, the GPS start and stop times around turning the dithers on/off can be input manually (lines 44-45) or given on the command line, for example:

  • python OMC_alignment_dither_check_qpd_offsets.py 1409932335 1409933965
  • gps0 = start time of data pull, e.g. just before dither lines turn on
  • gps1 = end time of data pull, e.g. just after dither lines turn off

There are three main plot outputs:

  • Time series of dithers + darm + dcpd_sum
  • DCPD BLRMS vs. QPD offsets. Purple = kappaC.  Green = DCPDs @ PCAL line. These should and do track, which makes sense. (A rough sketch of the BLRMS calculation follows this list.)
  • DARM BLRMS vs. QPD offsets. Purple = kappaC.
  • Dashed black line = original QPD offsets.  Solid red line = potentially better offsets (chosen by hand after looking at plots).
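A rough sketch of the BLRMS-vs-offset ingredient (channel names and dither band are assumptions): band-limit DCPD_SUM around the dither frequencies, take a short-stride RMS, and plot it against the ramped QPD offset.

    from gwpy.timeseries import TimeSeries

    gps0, gps1 = 1409932335, 1409933965   # example times from above
    dcpd = TimeSeries.get('H1:OMC-DCPD_SUM_OUT_DQ', gps0, gps1)
    offset = TimeSeries.get('H1:OMC-ASC_QPD_A_PIT_OFFSET', gps0, gps1)  # assumed channel

    blrms = dcpd.bandpass(10, 30).rms(stride=1)   # assumed dither band [Hz]
    # then plot blrms against the offset resampled onto the same time stamps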

 

Current offsets are all 0. Based on today's dithers, could try the following offsets:

  • A_PIT = -0.15
  • A_YAW = +0.15
  • B_PIT = -0.15        (this plateau'd and didn't go through an optimum)
  • B_YAW = -0.08      (this plateau'd and didn't go through an optimum)
Images attached to this comment