Return computing-cost functions for use in OptimalSolution4StackSlide_v2(), to compute the optimal StackSlide setup for a binary-CW search (freq, Period, asini, tAsc, ecc, argp), assuming the "long-segment regime" where Tseg >> P.
struct of computing-cost functions to pass to OptimalSolution4StackSlide_v2()
freqRange
[min, max] of search frequency range (default: [100, 300])
asiniRange
[min, max] of a*sini/c search range (default: [0.90000, 1.98000])
tAscRange
[min, max] of time of ascension search range (default: [897753694.073760, 897754294.073760])
PeriodRange
[min, max] of Period search range (default: [68023.5753600000, 68023.8345600000])
eccRange
[min, max] of eccentricity search range (default: [0, 0])
argpRange
[min, max] of argument(periapse) search range (default: [0, 0])
detectors
CSV list of detectors to use ("H1"=Hanford, "L1"=Livingston, "V1"=Virgo, …)
coh_duty
duty cycle of data within each coherent segment
resampling
use F-statistic resampling instead of ’demod’ timings for coherent cost [default: false]
lattice
template-bank lattice ("Zn", "Ans",..) [default: "Ans"]
coh_c0_demod
computational cost of F-statistic ’demod’ per template per second [optional]
coh_c0_resamp
computational cost of F-statistic resampling per template [optional]
inc_c0
computational cost of incoherent step per template per segment [optional]
grid_interpolation
use interpolating StackSlide, or non-interpolating (i.e. coherent grids == incoherent grid)
UnitsConstants;
refParams.Nseg = 40;
refParams.Tseg = 8.0 * DAYS;
refParams.mCoh = 0.5;
refParams.mInc = 0.5;
Tsft = 240;
costFuns = CostFunctionsBinary ( ...
  "freqRange", [20, 430], "detectors", "H1,L1", "resampling", false, ...
  "coh_c0_demod", 4e-8 / Tsft, ...
  "inc_c0", 5e-9, ...
  "lattice", "Ans" ...
);
cost0 = 12 * EM2014;
TobsMax = 360 * DAYS;
TsegMax = 10 * DAYS;
sol_v2 = OptimalSolution4StackSlide_v2 ( "costFuns", costFuns, "cost0", cost0, ...
  "TobsMax", TobsMax, "TsegMax", TsegMax, "stackparamsGuess", refParams, "maxiter", 3 );
tol = -1e-2;
assert ( sol_v2.Nseg, 36, tol );
assert ( sol_v2.Tseg, 864000, tol );
assert ( sol_v2.mCoh, 0.800740, tol );
assert ( sol_v2.mInc, 0.043971, tol );
UnitsConstants;
refParams.Nseg = 40;
refParams.Tseg = 8.0 * DAYS;
refParams.mCoh = 0.5;
refParams.mInc = 0.5;
costFuns = CostFunctionsBinary ( ...
  "freqRange", [20, 630], "detectors", "H1,L1", "resampling", true, ...
  "coh_c0_resamp", 3e-7, "inc_c0", 5e-9, ...
  "lattice", "Ans" ...
);
cost0 = 12 * EM2014;
TobsMax = 360 * DAYS;
TsegMax = 10 * DAYS;
sol_v2 = OptimalSolution4StackSlide_v2 ( "costFuns", costFuns, "cost0", cost0, ...
  "TobsMax", TobsMax, "TsegMax", TsegMax, "stackparamsGuess", refParams, "maxiter", 3 );
tol = -1e-2;
assert ( sol_v2.mCoh, 0.038235, tol );
assert ( sol_v2.mInc, 0.030999, tol );
assert ( sol_v2.Nseg, 36, tol );
assert ( sol_v2.Tseg, 864000, tol );
UnitsConstants;
refParams.Nseg = 40;
refParams.Tseg = 8.0 * DAYS;
refParams.mCoh = 0.5;
refParams.mInc = 0.5;
costFuns = CostFunctionsBinary ( ...
  "freqRange", [20, 200], "eccRange", [0, 0.087], "detectors", "H1,L1", "resampling", true, ...
  "coh_c0_resamp", 3e-7, "inc_c0", 5e-9, ...
  "lattice", "Ans" ...
);
cost0 = 12 * EM2014;
TobsMax = 360 * DAYS;
TsegMax = 10 * DAYS;
sol_v2 = OptimalSolution4StackSlide_v2 ( "costFuns", costFuns, "cost0", cost0, ...
  "TobsMax", TobsMax, "TsegMax", TsegMax, "stackparamsGuess", refParams );
tol = -1e-2;
assert ( sol_v2.mCoh, 0.81274, tol );
assert ( sol_v2.mInc, 0.30675, tol );
assert ( sol_v2.Nseg, 14.804, tol );
assert ( sol_v2.Tseg, 864000, tol );
Return computing-cost functions for use in OptimalSolution4StackSlide_v2(), to compute the optimal StackSlide setup for a directed search (known sky position; unknown f, fdot, f2dot, ...). Adapted from the metricComputingCost() function initially used in the S6CasA E@H search setup.
struct of computing-cost functions to pass to OptimalSolution4StackSlide_v2()
tau_min
spindown-age ’tau’ in seconds [default: 300 yrs]
brk_min
(minimal) braking index ’n0’ for spindown-bounds [default: 2]
fmin
lower search frequency bound [default: 50.00]
fmax
upper search frequency bound [default: 50.05]
boundaryType
what type of parameter-space boundary to assume [default: "EaHCasA"], one of:
EaHCasA
for a freq-dependent ’box’ in {f1dot,f2dot}, defined by (tau_min, brk_min)
S5CasA
for Karl’s CasA search construction, with brk-index in [2,7]
detectors
CSV list of detectors to use ("H1"=Hanford, "L1"=Livingston, "V1"=Virgo, ...)
coh_duty
duty cycle of data within each coherent segment
resampling
use F-statistic resampling instead of ’demod’ timings for coherent cost [default: false]
lattice
template-bank lattice ("Zn", "Ans",..) [default: "Ans"]
coh_c0_demod
computational cost of F-statistic ’demod’ per template per second [optional]
coh_c0_resamp
computational cost of F-statistic resampling per template [optional]
inc_c0
computational cost of incoherent step per template per segment [optional]
grid_interpolation
use interpolating StackSlide, or non-interpolating (i.e. coherent grids == incoherent grid)
UnitsConstants;
refParams.Nseg = 32;
refParams.Tseg = 8.0 * 86400;
refParams.mCoh = 0.12;
refParams.mInc = 0.41;
costFuns = CostFunctionsDirected( ...
  "fmin", 120, ...
  "fmax", 1000, ...
  "tau_min", 300 * YRSID_SI, ...
  "detectors", "H1,L1", "coh_duty", 0.53375, ...
  "resampling", false, ...
  "coh_c0_demod", 7.4e-8 / 1800, ...
  "inc_c0", 4.7e-9, ...
  "lattice", "Zn", ...
  "boundaryType", "EaHCasA" ...
);
cost0 = 3.1451 * EM2014;
TobsMax = 256.49 * DAYS;
sol_v2 = OptimalSolution4StackSlide_v2 ( "costFuns", costFuns, "cost0", cost0, ...
  "TobsMax", TobsMax, "stackparamsGuess", refParams );
tol = -1e-2;
assert ( sol_v2.mCoh, 0.11892, tol );
assert ( sol_v2.mInc, 0.40691, tol );
assert ( sol_v2.Nseg, 32.075, tol );
assert ( sol_v2.Tseg, 6.9091e+05, tol );
UnitsConstants;
refParams.Nseg = 100;
refParams.Tseg = 86400;
refParams.mCoh = 0.5;
refParams.mInc = 0.5;
costFuns = CostFunctionsDirected( ...
  "fmin", 100, ...
  "fmax", 300, ...
  "tau_min", 300 * YRSID_SI, ...
  "detectors", "H1,L1", "coh_duty", 0.7, ...
  "resampling", false, ...
  "coh_c0_demod", 7e-8 / 1800, ...
  "inc_c0", 6e-9, ...
  "lattice", "Ans", ...
  "boundaryType", "S5CasA" ...
);
cost0 = 472 * DAYS;
TobsMax = 365 * DAYS;
sol_v2 = OptimalSolution4StackSlide_v2 ( "costFuns", costFuns, "cost0", cost0, ...
  "TobsMax", TobsMax, "stackparamsGuess", refParams, "sensApprox", "WSG" );
tol = -1e-2;
assert ( sol_v2.mCoh, 0.19219, tol );
assert ( sol_v2.mInc, 0.25250, tol );
assert ( sol_v2.Nseg, 57.035, tol );
assert ( sol_v2.Tseg, 2.1643e+05, tol );
Return computing-cost functions used by OptimalSolution4StackSlide_v2() to compute optimal Einstein@Home search setups for the GCT code. Used to compute the E@H S5GC1 solution given in Prix & Shaltev, PRD 85, 084010 (2012), Table II.
struct of computing-cost functions to pass to OptimalSolution4StackSlide_v2()
fracSky
fraction of sky covered by search
fmin
minimum frequency covered by search (in Hz)
fmax
maximum frequency covered by search (in Hz)
tau_min
minimum spindown age, determines spindown ranges
detectors
CSV list of detectors to use ("H1"=Hanford, "L1"=Livingston, "V1"=Virgo, ...)
coh_duty
duty cycle of data within each coherent segment
resampling
use F-statistic resampling instead of ’demod’ for coherent cost [default: false]
lattice
template-bank lattice ("Zn", "Ans",..) [default: "Zn"]
coh_c0_demod
computational cost of F-statistic ’demod’ per template per second [optional]
coh_c0_resamp
computational cost of F-statistic resampling per template [optional]
inc_c0
computational cost of incoherent step per template per segment [optional]
grid_interpolation
whether to use interpolating or non-interpolating StackSlide (i.e. coherent grids == incoherent grid)
refParams.Nseg = 205;
refParams.Tseg = 25 * 3600;   ## 25(!) hours
refParams.mCoh = 0.5;
refParams.mInc = 0.5;
costFuns = CostFunctionsEaHGCT( ...
  "fracSky", 1/3, ...
  "fmin", 50, ...
  "fmax", 50.05, ...
  "tau_min", 600 * 365 * 86400, ...
  "detectors", "H1,L1", ...
  "resampling", false, ...
  "coh_c0_demod", 7e-8 / 1800, ...
  "inc_c0", 6e-9 ...
);
[ costCoh, costInc ] = costFuns.f( refParams.Nseg, refParams.Tseg, refParams.mCoh, refParams.mInc );
cost0 = costCoh + costInc;
TobsMax = 365 * 86400;
sol_v2 = OptimalSolution4StackSlide_v2 ( "costFuns", costFuns, "cost0", cost0, ...
  "TobsMax", TobsMax, "TsegMin", 3600, "stackparamsGuess", refParams );
tol = -1e-3;
assert ( sol_v2.mCoh, 0.14458, tol );
assert ( sol_v2.mInc, 0.16639, tol );
assert ( sol_v2.Nseg, 527.86, tol );
assert ( sol_v2.Tseg, 5.9743e+04, tol );
Cost function for lalapps_Weave
for use with OptimalSolution4StackSlide_v2
cost functions struct
EITHER
setup_file
Weave setup file
OR
detectors
Comma-separated list of detectors
ref_time
GPS reference time
start_time
GPS start time
semi_Tspan
Total time span of semicoherent search
EITHER
result_file
Weave result file
OR
sky_area
Area of sky to cover (4*pi = entire sky)
freq_min/max
Minimum/maximum frequency range
f1dot_min/max
Minimum/maximum 1st spindown
f2dot_min/max
Minimum/maximum 2nd spindown (optional)
Fmethod
F-statistic method used by search
stats
Comma-separated list of statistics being computed
lattice
Type of lattice to use (default: Ans)
timings
the name of a Weave result file to read fundamental timing constants from, or else the string "default" to use default timings
grid_interpolation
If true, compute the cost of an interpolating search (i.e. the semicoherent grid interpolates results on the coherent grids); if false, compute the cost of a non-interpolating search (i.e. identical coherent and semicoherent grids)
TSFT
Length of an SFT (default: 1800s)
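No demo accompanies this cost function in this listing. The following is a minimal, hypothetical usage sketch only: it assumes the function is CostFunctionsWeave from OctApps and uses the option names listed above, and every parameter value (GPS times, ranges, F-statistic method, statistics) is an illustrative assumption rather than a recommendation.
UnitsConstants;
## Hypothetical usage sketch -- all values below are illustrative assumptions.
costFuns = CostFunctionsWeave ( ...
  "detectors", "H1,L1", ...
  "ref_time", 1126051217, ...                ## GPS reference time (illustrative)
  "start_time", 1126051217 - 60 * DAYS, ...  ## GPS start time (illustrative)
  "semi_Tspan", 120 * DAYS, ...              ## total semicoherent time span
  "sky_area", 4*pi, ...                      ## entire sky
  "freq_min", 100, "freq_max", 100.05, ...
  "f1dot_min", -1e-8, "f1dot_max", 0, ...
  "Fmethod", "ResampBest", ...
  "stats", "mean2F", ...
  "lattice", "Ans", ...
  "timings", "default" ...
);
## costFuns can then be passed to OptimalSolution4StackSlide_v2() as in the demos above.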
Function to compute the 'critical' non-centrality parameter required to obtain exactly pFD false-dismissal probability at given pFA false-alarm probability, for a chi^2 distribution with 4*Nseg degrees of freedom, i.e. the solution noncent of the equations
pFA = prob ( S > Sth | noncent=0 )  --> Sth(pFA)
pFD = prob ( S < Sth(pFA) | noncent )
at given {pFA, pFD}, where S ~ chi^2_(4*Nseg) ( noncent ) is a chi^2-distributed statistic with 4*Nseg degrees of freedom and non-centrality noncent.
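For illustration only, the two defining equations can be solved numerically with the Octave statistics package; this is a sketch of the definition, not the function's actual implementation, and it assumes chi2inv and ncx2cdf are available from that package.
## Sketch: numerically solve the two defining equations above for noncent.
## Requires the Octave statistics package; not the actual implementation.
pkg load statistics;
pFA = 1e-10; pFD = 0.1; Nseg = 139;
dof = 4 * Nseg;
Sth = chi2inv ( 1 - pFA, dof );                                              ## threshold from the pFA condition
noncent = fzero ( @(nc) ncx2cdf ( Sth, dof, nc ) - pFD, [ 0, 100 * dof ] );  ## pFD condition
## sqrt(noncent) should come out close to 17.3, cf. the demo below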
The optional argument approx allows one to control the level of approximation:
approx == "none"
use full chi^2_(4*Nseg) distribution
approx == "Gauss"
use the Gaussian (N>>1) approximation
approx == "WSG"
use the "weak-signal Gaussian" (WSG) limit
## compare with example cases in Prix & Shaltev, PRD 85, 084010 (2012)
pFA = 1e-10; pFD = 0.1;
rhoF2 = CriticalNoncentralityStackSlide ( pFA, pFD, 1 );
assert ( sqrt(rhoF2), 8.35, -1e-2 );
Nseg = 139;
rhoS2 = CriticalNoncentralityStackSlide ( pFA, pFD, Nseg );
assert ( sqrt(rhoS2), 17.3, -1e-2 );
Compute local power-law coefficients fitted to the given computing-cost function cost_fun at the StackSlide parameters Nseg and Tseg = Tobs/Nseg, and at the mismatch parameters mCoh and mInc.
The computing-cost struct cost_fun must be of the form
lattice
string defining template-bank lattice
grid_interpolation
boolean switch whether coherent grids are interpolated
f
cost function of the form '[costCoh, costInc] = f(Nseg, Tseg, mCoh, mInc)', which must allow vector inputs in all four input arguments.
The return structures coefCoh and coefInc have fields {delta, eta, kappa, nDim, cost}, corresponding to the local power-law fit of computing cost, cost = kappa * mis^(-nDim/2) * Nseg^eta * Tseg^delta, according to Eqs. (61)-(64) of Prix & Shaltev, PRD 85, 084010 (2012).
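The following demo relies on a helper cost function test_f that is not reproduced in this listing. A minimal sketch consistent with the asserted coefficients, assuming a pure power law of the above form with kappa = pi, could look like:
## Hypothetical helper for the demo below: pure power-law costs whose exponents
## match the asserted fit coefficients {eta, delta, nDim} and kappa = pi.
function [costCoh, costInc] = test_f ( Nseg, Tseg, mCoh, mInc )
  costCoh = pi * mCoh.^(-3.3/2) .* Nseg.^2.2 .* Tseg.^4.4;
  costInc = pi * mInc.^(-1.3/2) .* Nseg.^4.2 .* Tseg.^1.4;
endfunction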
## trivial test example first
tol = 1e-8;
testCostFunction = struct ( "lattice", "Ans", "f", @test_f );
[coefCoh, coefInc] = LocalCostCoefficients_v2 ( testCostFunction, 100, 86400, 0.5, 0.3 );
assert ( coefCoh.eta, 2.2, tol );
assert ( coefCoh.delta, 4.4, tol );
assert ( coefCoh.nDim, 3.3, tol );
assert ( coefCoh.kappa, pi, tol );
assert ( coefCoh.lattice, "Ans" );
assert ( coefInc.eta, 4.2, tol );
assert ( coefInc.delta, 1.4, tol );
assert ( coefInc.nDim, 1.3, tol );
assert ( coefInc.kappa, pi, tol );
assert ( coefInc.lattice, "Ans" );
sol = OptimalSolution4StackSlide_v2 ( "option", val, "option", val, ... )
Computes a *self-consistent* solution for (locally-)optimal StackSlide parameters, given computing-cost functions (coherent and incoherent) and constraints (cost0, TobsMax, TsegMax, ...).
costFuns
structure containing parameters and cost-function handle
grid_interpolation
boolean flag about whether to use coherent-grid interpolation or not
nDim
(optional) fix number of dimensions [default: compute from cost scaling]
lattice
string specifying the template-bank lattice to use
fun
cost-function handle, of the form [ costCoh, costInc ] = fun(Nseg, Tseg, mCoh, mInc), where the cost function must accept vector arguments (of equal length, or scalars)
cost0
total computing cost (in CPU seconds)
You can optionally provide the following additional constraints:
TobsMax
maximal total observation time
TsegMin
minimal segment length
TsegMax
maximal segment length
stackparamsGuess
initial "guess" for solution, must contain fields {Nseg, Tseg, mCoh, mInc }
pFA
false-alarm probability at which to optimize sensitivity [1e-10]
pFD
false-dismissal probability (=1-detection-probability) [0.1]
tol
tolerance on the obtained relative difference of the solution, required for convergence [1e-2]
maxiter
maximal allowed number of iterations [10]
hitmaxtimes
how many times solutions are allowed to rail against the constraints before giving up [1]
minMismatch
minimum allowed mismatch for solution [0]
sensApprox
sensitivity approximation to use in SensitivityScalingDeviationN(), one of:
none [default]
Gauss
WSG
nonlinearMismatch
use the empirical nonlinear mismatch relation instead of the linear 'mis' = xi * m
The return structure 'sol' has fields {Nseg, Tseg, mCoh, mInc}, where
Nseg
the optimal (fractional!) number of segments
Tseg
the optimal segment length (in seconds)
mCoh
the optimal coherent-grid mismatch
mInc
the optimal incoherent-grid mismatch
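A minimal usage sketch, assuming costFuns and refParams have been constructed as in one of the cost-function demos above; the constraint values here are illustrative only.
## Minimal usage sketch; costFuns / refParams as in the demos above, constraints illustrative.
UnitsConstants;
sol = OptimalSolution4StackSlide_v2 ( "costFuns", costFuns, ...
                                      "cost0", 12 * EM2014, ...
                                      "TobsMax", 365 * DAYS, ...
                                      "stackparamsGuess", refParams );
Tobs = sol.Nseg * sol.Tseg;   ## implied total observation time
printf ( "Nseg = %.1f, Tseg = %.2f d, mCoh = %.3f, mInc = %.3f\n", ...
         sol.Nseg, sol.Tseg / 86400, sol.mCoh, sol.mInc );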
Compute the deviation parameter w of the local StackSlide-sensitivity power-law scaling coefficient from the weak-signal limit (where w=1). In the Gaussian weak-signal limit ("WSG"), the critical non-centrality RHO^2 scales exactly as RHO^2 ~ N^(1/2), and threshold signal-strength hth therefore scales as ~ N^(-1/4).
In general the N-scaling deviates from this, and we can locally describe it as a power law of the form RHO^2 ~ N^(1/(2w)) and hth ~ N^(-1/(4w)), respectively, where w quantifies the deviation from the WSG scaling.
approx == "none"
use full chi^2_(4*Nseg) distribution
approx == "Gauss"
use the Gaussian (N>>1) approximation
approx == "WSG"
return w=1 for the "weak-signal Gaussian" case
Nseg is allowed to be a vector, in which case the return w is also a vector.
tol = -1e-6; pFD = 0.1;
## compare numbers to those from Prix & Shaltev, PRD 85, 084010 (2012)
wGauss1_10 = SensitivityScalingDeviationN ( 1e-10, pFD, 1, approx = "Gauss" );
assert ( wGauss1_10, 1.38029957237533, tol );
wGauss13_10 = SensitivityScalingDeviationN ( 1e-10, pFD, 13, approx = "Gauss" );
assert ( wGauss13_10, 1.15371666877782, tol );
w1_2 = SensitivityScalingDeviationN ( 1e-2, pFD, 1, approx = "none" );
assert ( w1_2, 1.88370817833829, tol );
w13_2 = SensitivityScalingDeviationN ( 1e-2, pFD, 13, approx = "none" );
assert ( w13_2, 1.29212548567877, tol );