H1 General
ryan.crouch@LIGO.ORG - posted 04:51, Friday 08 November 2024 - last comment - 05:31, Friday 08 November 2024(81137)
OPS OWL assistance

H1 called for help because the initial alignment timer expired; we're getting hit by an earthquake (6.2 from Chile). It was only in GREEN_ARMS, so I brought it to DOWN for now. Once the ground motion calms down I'll restart locking. Secondary microseism is also increasing.

Comments related to this report
ryan.crouch@LIGO.ORG - 05:31, Friday 08 November 2024 (81138)

I've restarted locking after seeing the green arms hold for a few minutes.

H1 General
anthony.sanchez@LIGO.ORG - posted 22:00, Thursday 07 November 2024 (81136)
Thursday Eve Shift End

TITLE: 11/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 161Mpc
INCOMING OPERATOR: Ryan C
SHIFT SUMMARY:
IFO Locked for 8 Hours, No Superevent candidates
Everything is running smoothly.

LOG:
no log


H1 ISC (ISC)
keita.kawabe@LIGO.ORG - posted 18:45, Thursday 07 November 2024 - last comment - 18:01, Friday 08 November 2024(81130)
Fast shutter bouncing happens with an inconvenient timing

Summary:

The attachment shows a lockloss at around 11:31 PST today (tGPS~1415043099). It seems that the fast shutter, after it shut, bounced down and momentarily unblocked the AS beam at around the time the power peaked.

For this specific lock loss, the energy deposited into HAM6 was about 17J, and the energy that got past the fast shutter is estimated to be ~12J because of the bouncing.

The bouncing motion has been known to exist for some time (e.g. alogs 79104 and 79397; the latter has an in-air slow-mo video showing the bouncing), and it seems as if the self damping is not working. Could this be an electronics issue, a mechanical adjustment issue, or something else?

Also, if we ever open HAM6 again (before this fast shutter is decommissioned), it might be a good idea to make the shutter unit higher (shim?) so the beam is still blocked when the mirror reaches its lowest position while bouncing up and down.

Details:

The top panel shows the newly installed lockloss power monitor (blue) and ASC-AS_A_DC_NSUM, which monitors the power downstream of the fast shutter (orange).

The shutter was triggered when the power was ~3W or so at around t~0.36, and the ASC-AS_C level dropped by a factor of ~1E4 immediately (the FS mirror spec is T<1000ppm; it seems like it's ~100ppm in reality).

However, 50ms later at t~0.41 or so, the shutter bounced back down and stayed open for about 15ms.  Unfortunately this roughly coincided with the time when the power coming into HAM6 reached its maximum of ~760W.

Green is a rough projection of the power that went to the OMC (aka the "AS_A_DC_NSUM would have looked like this if it didn't rail" trace). This was made by simply multiplying the power monitor itself with AS_A_DC_NSUM>0.1 (1 if true, 0 if false), ignoring the 2nd, 3rd, and 4th bounces.

All in all, for this specific lock loss, the energy coming into HAM6 was 16~17J, and the energy that got past the FS was about 11~12J because of the timing of the bounce vs. the power. The OMC seems to be protected by the PZT though; see the 2nd attachment with a wider time range.
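As a minimal sketch of how these numbers come about (assuming both channels are calibrated to watts on a common time base; the 0.1 threshold is the one used for the green trace above):

import numpy as np

def lockloss_energies(t, p_in, p_as_a, open_threshold=0.1):
    # p_in: calibrated power into HAM6 [W] from the new lockloss monitor
    # p_as_a: ASC-AS_A_DC_NSUM, used only as a shutter-open boolean
    dt = np.median(np.diff(t))
    shutter_open = p_as_a > open_threshold         # True while light reaches AS_A
    e_total = np.sum(p_in) * dt                    # energy into HAM6 (~16-17 J here)
    e_past_fs = np.sum(p_in * shutter_open) * dt   # energy past the FS (~11-12 J here)
    return e_total, e_past_fs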

The time scale of the lock loss spike itself doesn't seem that different from the L1 lock loss in LLO alog 73514 where the power coming to HAM6 peaked tens of ms after AS_A/B/C power appreciably increased.

The OMC DCPDs might be OK since they didn't record crazy high current (though I have to say that IN1 should have been constantly railing once we started losing lock, which makes the assessment difficult), and since we've been running with a bouncy FS and the DCPDs have been good so far. Nevertheless, we need to study this more.

Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 12:29, Friday 08 November 2024 (81148)

Two lock losses, one from last night (1415099762, 2024-11-08 11:15:44 UTC, 03:15:44 PST) and another one that just happened (1415132263, 2024/11/08 20:17:25 UTC) look OK.

The shutter bounced ~50ms after the trigger but the power went down before that.

Images attached to this comment
keita.kawabe@LIGO.ORG - 18:01, Friday 08 November 2024 (81154)

Two more lock losses from today (1415141124, 2024-11-08 22:45:06 UTC and 1415145139, 2024-11-08 23:52:00 UTC) look OK.

In these plots, the shutter open/close state is judged by p(monitor)/p(AS_A) < some_threshold (open if true).
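A hedged sketch of that discriminator (the calibrations and the threshold value here are illustrative, not the exact numbers used for the plots):

import numpy as np

def shutter_is_open(p_monitor, p_as_a, threshold=1e3):
    # A closed shutter transmits only ~1e-4 of the incoming power to AS_A,
    # so the ratio blows up when closed; call it open while the ratio is small.
    ratio = p_monitor / np.maximum(p_as_a, 1e-9)   # guard against divide-by-zero
    return ratio < threshold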

Images attached to this comment
H1 CAL (CAL)
richard.savage@LIGO.ORG - posted 18:20, Thursday 07 November 2024 (81135)
Tune up of Optical Follower Servo in Pcal Tx module in Pcal lab

DriptaB and RickS

Today we investigated the 20 dB gain deficit in the Optical Follower Servo (OFS).  We found that the loop offset being set at 7.0 V may have been the source of the issue.

We followed the procedure in LIGO-T1400486 to test the performance of the OFS board and everything seemed normal.

We centered the beam on the OFS PD in the transmitter module and used the REFL output beam on the PS4 (WSH) power standard to set the loop offset to 3.94 V (V2 on the HP power supply), which gave 300 mW in the REFL beam.

We then measured the OFS OLTF (source at CLTF TEST IN; PD MON / ERR MON) and set the gain at 7.60 V (V1 on the HP power supply) to give a UGF of 100 kHz.  The phase margin is about 55 deg, as expected.

The loop seems to be operating normally now.   Screenshot of the OLTF measurement attached.

The system was left with the covers on the two power sensors in the responsivity ratio setup, the anodized aluminum beam block at the output of the Tx module, the two external shutters closed, and the Remote/Local switch in the Local position.

Non-image files attached to this report
H1 General
anthony.sanchez@LIGO.ORG - posted 16:39, Thursday 07 November 2024 (81133)
Thursday Eve Shift start

TITLE: 11/08 Eve Shift: 0030-0600 UTC (1630-2200 PST), all times posted in UTC
STATE of H1: Observing at 160Mpc
OUTGOING OPERATOR: Corey
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 4mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.24 μm/s
QUICK SUMMARY:
IFO is currently locked and OBSERVING for 2 hours and 45 minutes!

LHO General
corey.gray@LIGO.ORG - posted 16:37, Thursday 07 November 2024 (81121)
Thurs DAY Ops Summary

TITLE: 11/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
INCOMING OPERATOR: Tony
SHIFT SUMMARY:

H1's Returned To Join L1 & V1 For Some Triple Coincidence!

The day started with Daniel driving to EY to fix our ailing ALSy laser.  After that Camilla ran an H1 alignment and got H1 back to NLN for a bit of Observing time, but then there was a lockloss.  After this we tried locking, but DRMI/PRMI both looked ugly, so another alignment was needed (and this wasn't trivial, since the Manual Alignment attempts did NOT do the trick, and for a full Initial Alignment SRY was too misaligned and required some "alignment by hand" with SR2 & SRM).  But after this, H1 returned to NLN and we've been there for over 2.5 hrs (mostly Observing for this time, other than a 15 min drop for Vicky to touch up the Squeezer).

OPERATOR NOTE:  If EX HEPI trips, call Jim!!  (this is due to low fluid levels near the trip level)
LOG:

H1 SUS
anthony.sanchez@LIGO.ORG - posted 16:22, Thursday 07 November 2024 (81132)
Weekly In-Lock SUS Charge Measurement

FAMIS 28376

ls /opt/rtcds/userapps/release/sus/common/scripts/quad/InLockChargeMeasurements/rec_LHO  -lt | head -n 6
total 511
-rw-r--r-- 1 test_user          controls   160 Oct  8 07:58 ETMX_12_Hz_1412434733.txt
-rw-r--r-- 1 test_user          controls   160 Oct  8 07:50 ETMY_12_Hz_1412434262.txt
-rw-r--r-- 1 test_user          controls   160 Oct  8 07:50 ITMY_15_Hz_1412434244.txt
-rw-r--r-- 1 test_user          controls   160 Oct  8 07:50 ITMX_13_Hz_1412434243.txt
-rw-r--r-- 1 test_user          controls   160 Sep 24 07:58 ETMX_12_Hz_1411225133.txt

The latest measurements were already documented in this alog, and the measurements were not run this week.

H1 SQZ
victoriaa.xu@LIGO.ORG - posted 15:57, Thursday 07 November 2024 (81131)
Popped out of observe for quick SQZ adjustment

Adjusted OPO temperature to maximize RF6 with SQZ_MANAGER paused and the guardian SQZ_ANG_ADJUST in "DOWN".

Then turned SQZ_ANG_ADJUST back on ("ADJUST_SQZ_ANG_SDF") and adjusted the squeeze angle servo setpoint (the ADF demod phase, H1:SQZ-ADF_OMC_TRANS_PHASE, from -133 to -124; see accepted SDF) to maximize squeezing (increased ~1 dB of squeezing at kHz) and range (from the trends, increased by maybe 5-10 Mpc?).

Images attached to this report
H1 DetChar (DetChar)
ansel.neunzert@LIGO.ORG - posted 14:14, Thursday 07 November 2024 (81129)
Some clues regarding disturbance degrading Crab pulsar sensitivity

As reported in 79897, a bump has been apparent in the H1 spectrum in the vicinity of the Crab pulsar for much of O4b.

So far, it looks like:

The attached plots are all generated from Fscan weekly and monthly averaged data. The feature can be seen most clearly in these long-duration spectra.

There is a git issue tracking the problem here, with some additional discussion history and more plots: https://git.ligo.org/detchar/detchar-requests/-/issues/273

Images attached to this report
H1 SQZ (OpsInfo)
camilla.compton@LIGO.ORG - posted 11:55, Thursday 07 November 2024 (81127)
New notification in FDS: SQZ angle is out of range, request RESET_SQZ_ANG_FDS and then FREQ_DEP_SQZ

We've been seeing the SQZ_ANG_ADJUST servo run away at the start of a lock (maybe especially if the IFO is cold), e.g. attached from today before Vicky reset it, and over the weekend (81030).

We added a notification to SQZ_MANAGER FREQ_DEP_SQZ to return False (kicking us out of observing, as the range will be terrible) and notify 'SQZ angle is out of range, request RESET_SQZ_ANG_FDS and then FREQ_DEP_SQZ' if the angle H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG is outside of 100 to 250 deg.

To solve this, request RESET_SQZ_ANG_FDS and then FREQ_DEP_SQZ. This should turn off the SQZ_ANG_ADJUST servo, reset it to a normal value, wait ~60 seconds for the ASC to move, and then return True so you can go back to FREQ_DEP_SQZ.
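Schematically, the added check behaves like this guardian-style snippet (the channel name, limits, and message are as described above; the actual SQZ_MANAGER code may differ in detail):

ANGLE_CHANNEL = 'H1:SQZ-CLF_REFL_RF6_PHASE_PHASEDEG'

def run(self):
    angle = ezca[ANGLE_CHANNEL]
    if not (100 < angle < 250):
        notify('SQZ angle is out of range, request RESET_SQZ_ANG_FDS '
               'and then FREQ_DEP_SQZ')
        return False   # kicks us out of observing
    return True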

Images attached to this report
H1 General (Lockloss)
oli.patane@LIGO.ORG - posted 11:54, Thursday 07 November 2024 (81128)
Lockloss

Lockloss @ 11/07 19:31UTC, possibly due to commissioning activities. Doesn't look like anything in PSL/IMC land (ndscope1). We had some glitches in DARM the second before the lockloss, and actuation to the top mass of PRM had just been started a few seconds before, so it's possible that that was the reason (ndscope2).

Images attached to this report
LHO VE
david.barker@LIGO.ORG - posted 11:02, Thursday 07 November 2024 (81126)
Thu CP1 Fill

Thu Nov 07 10:03:03 2024 INFO: Fill completed in 3min 1secs

Travis confirmed a good fill curbside.

Images attached to this report
LHO General
corey.gray@LIGO.ORG - posted 07:42, Thursday 07 November 2024 - last comment - 10:41, Thursday 07 November 2024(81120)
Thurs DAY Ops Transition

TITLE: 11/07 Day Shift: 1530-0030 UTC (0730-1630 PST), all times posted in UTC
STATE of H1: Aligning
OUTGOING OPERATOR: Ibrahim
CURRENT ENVIRONMENT:
    SEI_ENV state: CALM
    Wind: 5mph Gusts, 3mph 3min avg
    Primary useism: 0.02 μm/s
    Secondary useism: 0.21 μm/s
QUICK SUMMARY:

H1 hasn't made it back to NLN in over 24 hrs (with a low duty cycle overall these last weeks).  A quick glance shows ALSy is not stable, as seen yesterday afternoon.  Just chatted with Sheila, and she suggested checking the stability of the PMC overnight after yesterday's PSL NPRO crystal temperature change.

Microseism did increase a bit in the last 24hrs, but has also come back down to where it was about 18hrs ago; very low winds.

The Ops Work Station (cdsws29) had a bad "left" monitor (started flickering yesterday)---Jonathan is replacing it now.

Comments related to this report
camilla.compton@LIGO.ORG - 10:16, Thursday 07 November 2024 (81124)

After Daniel got back, Corey locked the green arms fine and started locking. We couldn't catch PRMI even after CHECK_MICH_FRINGES, so I started an initial alignment at 17:13UTC. I had to go through DOWN INIT twice, as the first time the PMC input power wasn't reduced to 2W (it stayed at 10W, maybe because we went through CHECK_SDF).

After INITIAL_ALIGNMENT, locking has been fully automated. We lost lock at ENGAGE_ASC_FOR_FULL_IFO once; the second time we got past this step and are currently at POWER_10W.

oli.patane@LIGO.ORG - 10:41, Thursday 07 November 2024 (81125)
Images attached to this comment
H1 PSL
jason.oberling@LIGO.ORG - posted 07:13, Thursday 07 November 2024 (81119)
PMC Output Since NPRO Crystal Temperature Tuning

Attached is a roughly 19 hour trend, going back to relocking the PMC/ISS/FSS after yesterday's NPRO crystal temperature tuning.  As can be seen, there were no excursions in PMC Refl above 18 W, and no sudden drops in the ISS diffracted power % or sudden ISS unlocks, a clear change from the last several days.  We will continue to monitor this, but so far it looks like the new NPRO is much happier with this new crystal temperature.

Images attached to this report
H1 ISC
daniel.sigg@LIGO.ORG - posted 00:32, Thursday 07 November 2024 - last comment - 09:15, Thursday 07 November 2024(81117)
ALS Y laser unstable

The laser in EY looks unstable. It may be multimode.

The first attached plot shows the ALSY locking attempts over the past day. The laser was adjusted to the new PSL frequency about 10 hours ago. Afterwards there is a distinct variation in the green output, accompanied by a smaller inverse change in the red power. In the past 6 hours, good ALS locks (red trace near +1.0) are correlated with the lower green power states. At higher green powers we seem to be locking at intermediate transmitted powers, indicating the possibility of a multimode laser.

The second attached plot shows how changing the frequency of the laser changes the green and red powers. The green power varies a lot between -600 and +400MHz and seems to be stable outside this region.

We need to try adjusting the pump diode current and crystal temperature and see if we can resolve it this way.

Images attached to this report
Comments related to this report
daniel.sigg@LIGO.ORG - 09:15, Thursday 07 November 2024 (81123)

Went to EY and adjusted the laser diode current and temperature.  Hopefully, we are now in a more stable region of operation.

current as found: 1.600
doubler as found: 33.64

new current 1.511
new temp 30.00
new doubler 34.20

This resulted in a lower green power output. Correspondingly, the normalizations of H1:ALS-Y_LASER_GR_DC, H1:ALS-C_TRY_A_LF, and H1:ALS-C_TRY_A_DC_POWER were lowered by a factor of 1.4.

The nominal laser diode power was also updated for H1:ALS-Y_LASER_HEAD.

The EY controller and/or laser is still somewhat flaky when trying to adjust the temperature. If this doesn't work, we may have to consider swapping in a spare.

Images attached to this comment
H1 TCS
oli.patane@LIGO.ORG - posted 14:01, Wednesday 06 November 2024 - last comment - 08:38, Tuesday 12 November 2024(81106)
TCS Monthly Trends FAMIS

Closes FAMIS#28454, last checked 80607

CO2 trends looking good (ndscope1)

HWS trends looking good (ndscope2)

You can see in the trends when the ITMY laser was swapped about 15-16 days ago.

Images attached to this report
Comments related to this report
camilla.compton@LIGO.ORG - 08:39, Thursday 07 November 2024 (81122)

Trend shows that the ITMY HWS code stopped running. I restarted it.

camilla.compton@LIGO.ORG - 08:38, Tuesday 12 November 2024 (81220)CDS

Erik, Camilla. We've been seeing that the code running on h1hwsmsr1 (ITMY) kept stopping after ~1 hour with a "Fatal IO error 25" (Erik said it's related to a display); see attached.

We checked that the memory on h1hwsmsr1 is fine. Erik traced this back to matplotlib trying to make a plot and failing, as there was no display to make the plot on. State3.py calls get_wf_new_center() from hws_gradtools.py, which calls get_extrema_from_gradients(), which makes a contour plot; it's trying to make this plot and thinks there's a display, but then can't plot it.  This error isn't happening on h1hwsmsr (ITMX).  I had ssh'ed into h1hwsmsr1 using the -Y -C options (allowing the streamed image to show), but Erik found this was making the session think there was a display when there wasn't.

Fix: quit the tmux session, log in without options (ssh controls@h1hwsmsr1), and start the code again. The code has now been running fine for the last 18 hours.
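A possible extra safeguard (a suggestion, not something done here) would be to pin the HWS plotting code to a non-interactive matplotlib backend, so an inherited DISPLAY from an ssh -Y session can never break it:

import matplotlib
matplotlib.use('Agg')            # render off-screen; no X display needed
import matplotlib.pyplot as plt  # must come after the backend is set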

Images attached to this comment
H1 SEI
filiberto.clara@LIGO.ORG - posted 09:12, Wednesday 06 November 2024 - last comment - 08:36, Friday 08 November 2024(81097)
3T Guralp seismometers Huddle Test - LVEA

WP 12139

Entry for work done on 11/5/2024

Two 3T seismometers were installed in the LVEA Biergarten next to the PEM area. Signals are routed through the SUS-R3 PEM patch panel into the CER. Signals are connected to PEM AA chassis 4 and 5.

F. Clara, J. Warner

Comments related to this report
jim.warner@LIGO.ORG - 15:12, Wednesday 06 November 2024 (81103)

There are 2 of these plugged in; they are 3-axis seismometers, serial numbers T3611670 and T3611672. The first one is plugged into ports 4, 5 & 6 on the PEM patch panel, the second is plugged into ports 7, 8 & 9. In the CER, T3611670 is plugged into ports 21, 22 & 23 on PEM ADC5 and T3611672 is plugged into ports 27, 28 & 29 on PEM ADC4. In the DAQ, these channels are H1:PEM-CS_ADC_5_{20,21,22}_2K_OUT_DQ and H1:PEM-CS_ADC_4_{26,27,28}_2K_OUT_DQ. So far the seismometers look like they are giving pretty good data, similar to the STS and the old PEM Guralp in the biergarten. The seismometers are oriented so that the "north" marking on their carry handles is pointed down the X-arm, as best as I could by eyeballing it.

I need to figure out the calibrations, but it looks like there is almost exactly a -15 dB difference between these new sensors and the old PEM Guralp, though maybe the signal chain isn't exactly the same.
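For reference, reading that -15 dB as an amplitude ratio (assuming the usual 20*log10 convention) agrees with the ~5x figure in the comparison below:

ratio = 10 ** (-15 / 20)   # ~0.178
print(1 / ratio)           # ~5.6x lower than the old Guralp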

Attached images compare the 3T's to the ITMY STS and the existing PEM Guralp in the biergarten. The first image compares ASDs for each seismometer. Shapes are pretty similar below 40 Hz, but above that they all have very different responses.  I don't know what the PEM Guralp is calibrated to, if anything; it looks ~10x lower than the STS (which is calibrated to nm/s). The 3T's are about 5x lower than the PEM sensor, so ~50x lower than the STS.

The second image shows TFs for the X, Y & Z dofs between the 3T's and the STS. These are just passive TFs between the STS and the 3T's, to see if they have a similar response to ground motion. They are generally pretty flat between 0.1 and 10 Hz. The X & Y dofs seem pretty consistent; the Z TFs are different starting around 10 Hz. I should go and check that the feet are locked and have similar extension.

The third image shows TFs between the 3T's and the existing PEM Guralp. Pretty similar to the TFs with the STS: the horizontal dofs all look very similar, flat between 0.1 and 10 Hz, but the ADC4 sensor has a different vertical response.

I'll look at noise floors next.


Images attached to this comment
jim.warner@LIGO.ORG - 17:06, Thursday 07 November 2024 (81134)

The noise for these seems almost comparable to T240s above 100 mHz; I'm less certain about the noise below 100 mHz, as these don't have thermal enclosures like the other ground sensors. Using mccs2 in MATLAB to remove all the coherent noise with the STS and PEM Guralp, the residual noise is pretty close to the T240 spec noise in SEI_sensor_noise. Attached plots are the ASDs and residuals after finding a scale factor that matches the 3T ASDs to the calibrated ITMY STS ASDs. Solid lines are the 3T ASDs, dashed lines are the residuals after coherent subtraction.
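For a single witness, the coherent subtraction can be sketched in Python as below (mccs2 generalizes this to multiple witnesses; this is illustrative only):

import numpy as np
from scipy import signal

def residual_asd(target, witness, fs, nperseg=4096):
    f, coh = signal.coherence(target, witness, fs=fs, nperseg=nperseg)
    _, psd = signal.welch(target, fs=fs, nperseg=nperseg)
    asd = np.sqrt(psd)
    return f, asd, asd * np.sqrt(1 - coh)   # ASD and residual after subtraction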

Images attached to this comment
brian.lantz@LIGO.ORG - 08:36, Friday 08 November 2024 (81141)

For convenience I've attached the responses of the T240 and the STS-2 from the manuals.

These instruments both have a steep fall-off above 50-60 Hz.
This is not compensated in the realtime filters, as it would just add lots of noise at high frequency, and then we'd have to roll it off again so it doesn't add lots of nasty noise.

T240 user guide - pg 45
https://dcc.ligo.org/LIGO-E1500379
The T240 response is pretty flat up to 10 Hz, has a peak at ~ 50 Hz, then falls off rapidly.
 

STS-2 manual - pg 7
https://dcc.ligo.org/LIGO-E2300142
Likewise the STS-2 response is pretty flat up to 10 Hz, then there is ripple, and a steep falloff above 60 Hz

Images attached to this comment
H1 ISC (ISC)
keita.kawabe@LIGO.ORG - posted 16:39, Tuesday 05 November 2024 - last comment - 12:29, Thursday 07 November 2024(81080)
Installation of the AS power monitor in the AS_AIR camera enclosure

I've roughly copied the LLO configuration for the AS power monitor (one that won't saturate after lock losses) and installed an additional photodiode in the AS_AIR camera enclosure. The PD output goes to H1:PEM-CS_ADC_5_19_2K_OUT_DQ for now.

The GigE used to receive ~40ppm of the power coming into HAM6. I replaced the steering mirror in front of the GigE with a 90:10 splitter; the camera now receives ~36ppm, and the beam going to the photodiode is ~4ppm. But I installed an ND1.0 filter in front of the PD, so the actual power on the PD is ~0.4ppm.

See the attached cartoon (1st attachment) and the picture (2nd attachment).

Details:

  1. Replaced the HR mirror with a 90:10 splitter (BS1-1064-90-1025-45P), roughly kept the beam position on the camera, and installed a Thorlabs PDA520 (Si detector, ~0.3 A/W) with an OD1.0 absorptive ND filter in the transmission. I set the gain of the PDA520 to 0dB (transimpedance = 10 kOhm).
  2. Reflection of the GigE was hitting the mirror holder of 90:10, so I inserted a beam dump there. The beam dump is not blocking the forward-going beam at all (3rd attachment).
  3. The reflection off the ND filter hits the black side panel of the enclosure. I thought of using a magnetic base, but the enclosure material is non-magnetic. Angling the PD to steer the reflection into the beam dump for the GigE reflection is possible but takes time (the PD is in an inconvenient location for seeing the beam spot).
  4. Fil made a custom cable to route power and signal through the unused DB9 feedthrough on the enclosure.  Pin 1 = Signal, Pin 6 = Signal GND, Pin 4 = +12V, Pin 5 = -12V, Pin 9 = power GND. (However, the power GND and signal GND are connected inside the PD.) All pins are isolated from the chamber/enclosure as the 1/2" metal post is isolated from the PD via a very short (~1/4"?) plastic adapter.
  5. Calibration of this channel (using ASC-AS_C_NSUM) seems to be about 48mW/Ct.
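A rough sketch of how such a calibration number can be extracted (assuming a quiet stretch where both channels see the same beam; the data fetching is omitted):

import numpy as np

def watts_per_count(pd_counts, as_c_watts):
    # least-squares slope through the origin: W/ct
    return np.dot(pd_counts, as_c_watts) / np.dot(pd_counts, pd_counts)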
Images attached to this report
Comments related to this report
keita.kawabe@LIGO.ORG - 17:11, Tuesday 05 November 2024 (81086)

This is a first look at a lock loss from 60W. At least the channel didn't saturate, but we might need more headroom (it should rail at 32k counts).

The peak power in this example is something like 670W. (I cannot figure out for now which AA filter and maybe decimation filter are in place for this channel; these things might be impacting the shape of the peak.)

Operators, please check if this channel rails after locklosses. If it does I have to change the ND filter.

Also, it would be nice if the lock loss tool automatically triggers a script to integrate the lock loss peak (which is yet to be written).
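A minimal sketch of what that script could look like (the gwpy fetch is standard; the time window and the ~48 mW/ct calibration from item 5 above are assumptions, and the calibration changes whenever the ND filters do):

import numpy as np
from gwpy.timeseries import TimeSeries

def lockloss_peak_energy(t_lockloss, cal=0.048):   # W/ct, see item 5 above
    data = TimeSeries.get('H1:PEM-CS_ADC_5_19_2K_OUT_DQ',
                          t_lockloss - 1, t_lockloss + 2)
    power = cal * data.value                # counts -> watts
    return np.sum(power) * data.dt.value    # joules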

Images attached to this comment
ryan.crouch@LIGO.ORG - 09:20, Wednesday 06 November 2024 (81096)ISC, OpsInfo

Tagging Opsinfo

Also checking out the channel during the last 3 high power locklosses this morning (NLN, OMC_WHITENING, and MOVE_SPOTS). For the NLN lockloss, it peaked at ~16.3k cts 80ms after the IMC lost lock. Dropping from OMC_WHITENING only saw ~11.5k cts 100ms after ASC lost it. Dropping from MOVE_SPOTS saw a much higher reading (at the railing value?) of ~33k cts also ~100 ms after ASC and IMC lost lock.

Images attached to this comment
ryan.crouch@LIGO.ORG - 11:05, Wednesday 06 November 2024 (81102)

Camilla taking us down at 10W earlier this morning did not rail the new channel; it saw about ~21k cts.

Images attached to this comment
keita.kawabe@LIGO.ORG - 17:34, Wednesday 06 November 2024 (81111)

As for the filtering associated with the 2k DAQ, PEM seems to have a standard ISC AA filter, but the most impactful filter is an 8x decimation filter (16k -> 2k). Erik told me that the same 8x filter is implemented in src/include/drv/daqLib.c: line 187 (bi-quad form) and line 280 (not bi-quad), and one is mathematically transformed into the other and vice versa.

In the attached, it takes ~1.3ms for the step response of the decimation filter to reach its first unity point, which is not really great but is OK for what we're observing, as the lock loss peaks seem to be ~10ms FWHM. For now I'd say that it's not unreasonable to use this channel as is.
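That kind of number can be sanity-checked with a stand-in filter (illustrative only; the real daqLib decimation coefficients differ):

import numpy as np
from scipy import signal

fs = 16384
sos = signal.cheby1(8, 0.05, 900, fs=fs, output='sos')  # stand-in 8x decimation LP
t = np.arange(0, 0.01, 1 / fs)
step = signal.sosfilt(sos, np.ones_like(t))
i = int(np.argmax(step >= 1.0))            # first sample at/above unity
print(f"first unity point ~{1e3 * t[i]:.2f} ms")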

Images attached to this comment
keita.kawabe@LIGO.ORG - 12:29, Thursday 07 November 2024 (81112)

I added ND0.6, which will buy us about a factor of 4.

(I'd have used ND2.0 instead of ND1.0 plus ND0.6, but it turns out that Thorlabs ND2.0 is more transparent at 1um relative to 532nm than ND1.0 and ND0.6 are. Looking at their data, ND2.0 seems to transmit ~4-5% at 1um. ND1.0 and ND0.6 are closer to their nominal optical densities at 1um.)

New calibration for H1:PEM-CS_ADC_5_19_2K_OUT_DQ using ASC-AS_C_NSUM_OUT (after the ND was increased to 1.0+0.6) is ~0.708/4.00 ~ 0.177 W/count.
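The factor of 4 is just the nominal attenuation of the added ND0.6:

attenuation = 10 ** 0.6     # ~3.98, i.e. "about a factor of 4"
cal = 0.708 / attenuation   # ~0.178 W/count, matching the quoted ~0.177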

Images attached to this comment