1 Updates of which to be aware for version 7.8.x

1.1 Applying CMPs to get C1

The package now contains the 2023 data in the format used by the CMPs. This is a dset list object called data_2023 that contains data for the East, data_2023[[1]], and the West, data_2023[[2]]. There are four simulations in the data_2023 object so that it is clear which data are being generated by the model in the interim years (the US RR indices, which are no longer used by CMPs anyway).

Also included are four of the real CMPs that have been developed and tuned: BR, LW, TC and FO (FO requires the fishmethods R package).

Using these we can examine the data and calculate TAC advice for the first year (C1):


 data_2023[[1]]@Iobs # Indices available for TAC advice in 2023 (ie up to 2021)
 data_2023[[1]]@Cobs # Catches in the East for TAC advice in 2023 (ie up to 2021)
 data_2023[[2]]@Cobs # Catches in the West for TAC advice in 2023 (ie up to 2021)
 
 avail('MP')
 sim=1
 
 TAC_E_BR <- BR5a(sim,data_2023[[1]]) # East
 TAC_W_BR <- BR5a(sim,data_2023[[2]]) # West
 
 TAC_E_TC<- TC5a(sim,data_2023,AS=1) # East
 TAC_W_TC<- TC5a(sim,data_2023,AS=2) # West
 
 # run an MSE if you want:
 testMSE<-new('MSE',OM_1t,MPs=list(c("BR_E","BR_W"),c("TC","TC"),c("LW_E","LW_W")),returnPPD=T)

 # check static (deterministic C1)
 year2023<-59
 testMSE@CWa[,,1,year2023] # Eastern [MP, sim]
 testMSE@CWa[,,2,year2023] # Western [MP, sim]
 

2 Updates of which to be aware for version 7.7.x

2.1 Weighted PGK tuning

The new priority for CMP tuning is PGK, the probability green Kobe (underfished subject to underfishing) after 30 projection years.
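As a rough illustration of what a PGK calculation does (this sketch is not the package's PGK_Wt() implementation, and the array names are hypothetical), the statistic is just the proportion of simulations that sit in the green Kobe quadrant in the chosen projection year:

```r
# Sketch: PGK as the proportion of simulations with B > BMSY and F < FMSY
# in a given projection year. Array names and layout here are hypothetical.
set.seed(1)
nsim <- 100; nyear <- 35
B_rel <- matrix(rlnorm(nsim * nyear, 0, 0.3), nsim, nyear)    # B / BMSY by sim x year
F_rel <- matrix(rlnorm(nsim * nyear, -0.2, 0.3), nsim, nyear) # F / FMSY by sim x year

PGK <- function(B_rel, F_rel, yr = 30)
  mean(B_rel[, yr] > 1 & F_rel[, yr] < 1)

PGK(B_rel, F_rel)  # a probability between 0 and 1
```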

Similarly to Br30_Wt(), the weighted PGK can be calculated using the function PGK_Wt(), which works on a list of reference grid MSE objects. To run the MSEs and return the weighted PGK you could use this code:


 MSEfunc<- function(i) new('MSE', OM=get(OMs[i]), MPs=list(c('U5','U5'),c('U10','U10'))) # make a run-MSE function
 OMs<-paste0("OM_",1:48,"t") # name all the tuning OMs
 MSElist<-sapply(1:length(OMs),MSEfunc) # store all stochastic tuning MSE results in a list
 PGK_Wt(MSElist) # return a table of weighted PGK tunings, defaults to median, q = 0.5
 

Alternatively, if you have already run your stochastic tuning MSEs for the reference set, you could load the list of MSE objects and apply the function:


 MSEdir<-"C:/temp/AIMP4_tune_10/Tune_1_2/"
 MSEfiles<-paste0(MSEdir,"MSE_",1:48,".rda") 
 MSElist<-sapply(1:48,function(x)readRDS(MSEfiles[x])) # store all stochastic MSE results in a list
 lapply(MSElist,function(x)x@Name) # check these are in correct order (1 - 48)
 PGK_Wt(MSElist) # return a table of weighted PGK tunings, defaults to median, q = 0.5
 

2.2 Running MSE for a 3 year update interval

The management interval over which TAC updates are made defaults to 2 years and is specified in a slot in each OM object:


library(ABTMSE)
loadABT()
OM_1@interval

To run the MSEs with a 3-year update interval you need to create a new set of OMs with their own naming convention before you run an MSE with them. Here is some code to do that:


OMs<-paste0("OM_",1:48,"t")      # names of the reference set stochastic tuning OMs
OMs3<-paste0(OMs,"_i3")          # new names of the same OMs, that will have a 3-year update interval

for(i in 1:length(OMs)){
  nuOM<-get(OMs[i])              # retrieve the regular OM with the default 2-year update interval
  nuOM@interval <- as.integer(3) # change to a 3-year interval
  assign(OMs3[i],nuOM)           # assign the modified OM to the new name (e.g. OM_1t_i3)
}

# make an MSE function that can be applied to the vector of OM names
MSEfunc<- function(i) { new('MSE', OM=get(OMs3[i]), MPs=list(c('TC1','TC1'),c('TC2','TC2')))}

# Run the MSE function on the new 3-year interval OM objects
MSElist<-sfSapply(1:length(OMs3),MSEfunc)

2.3 Designing an MP with lower variance in catch recommendations

You may wish to reduce the variability in the management advice provided by your CMP. This can be useful for evaluating the biological and yield performance trade-offs that occur for a given CMP type that gives more stable advice.

One way to do this is to calculate the implied TAC update fraction, reduce (or increase) it, and then apply the adjusted update. In the approach below, the change in advice is log transformed, scaled down and then exponentiated to get the new update. The example modifies a simple constant harvest rate MP to reduce the implied TAC change:


# some dummy variables for testing the internal calculations of the TR1 CMP below
x<-1; dset<-dset_example_East; TACfac=11000; index=4; VarCadj=0.43; maxUp=0.2; maxDown=0.2

TR1<-function(x, dset, TACfac=11000, index=4, VarCadj=0.43, maxUp=0.2, maxDown=0.2){

  cury<-dim(dset$TAC)[2]  # Most recent TAC advice year
  ny<- dim(dset$Iobs)[3]  # Number of years of index observations
  
  newTAC<-dset$Iobs[x,index,ny]*TACfac            # calculate TAC advice as a fixed ratio of the index (ie fixed harvest rate)
  oldTAC<-dset$TAC[x,cury]                        # get the previous TAC advice
  TAC_change <- newTAC/oldTAC                     # the implied ratio of the new to old advice
  
  new_TAC_change <- exp(log(TAC_change)*VarCadj)  # multiply the log TAC change by VarCadj
  
  if(new_TAC_change>(1+maxUp))new_TAC_change=(1+maxUp)       # apply 20% max upward TAC change constraint
  if(new_TAC_change<(1-maxDown))new_TAC_change=(1-maxDown)   # apply 20% max downward TAC change constraint
  
  adjTAC <- oldTAC * new_TAC_change                          # apply new TAC change
  
  adjTAC                  # return adjusted TAC (approximately half the adjustment downwards / upwards from previous TAC)
  
}
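To see what the VarCadj exponent does numerically, here is a standalone sketch of the log-space damping used above:

```r
# Sketch of the log-space damping used in TR1: raising the TAC ratio to the
# power VarCadj shrinks the update towards no change (ratio of 1).
TAC_change <- 1.30                        # raw advice implies a 30% increase
VarCadj    <- 0.43
damped     <- exp(log(TAC_change) * VarCadj)
damped                                    # ~1.12, i.e. roughly a 12% increase
```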

3 Updates of which to be aware for version 7.4.x

3.1 Stochastic tuning OMs + weighted Br30 tuning

In order to undertake tuning to stochastic simulations for the reference grid, a grid of tuning OMs (e.g. OM_1t, OM_2t, …, OM_48t) has been developed that have only 4 stochastic simulations each and a unique random seed for each OM. Note that this approach does not perfectly match tuning in the full grid of stochastic OMs, but is much closer than the previous approach using the deterministic grid OMs.

This can be used as before with the function Br30_Wt(), which was developed to return the weighted median of Br30 for each stock and MP given a list of reference grid MSE objects.
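The weighting idea can be sketched in a few lines of base R, assuming one Br30 value per OM and a vector of OM weights (weighted_quantile() is a hypothetical helper, not the package's Br30_Wt() code):

```r
# Sketch of a weighted quantile, the idea behind Br30_Wt() / PGK_Wt().
# weighted_quantile() is a hypothetical helper, not a package function.
weighted_quantile <- function(x, w, q = 0.5) {
  o <- order(x)
  x <- x[o]; w <- w[o] / sum(w)
  x[which(cumsum(w) >= q)[1]]  # first value whose cumulative weight reaches q
}
Br30 <- c(0.8, 1.1, 1.4, 0.9)   # one value per OM (dummy numbers)
wts  <- c(0.4, 0.3, 0.2, 0.1)   # OM weights (cf. the OM_wt object)
weighted_quantile(Br30, wts)    # weighted median = 0.9 here
```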


 MSEfunc<- function(i) new('MSE', OM=get(OMs[i]), MPs=list(c('U5','U5'),c('U10','U10'))) # make a run-MSE function
 OMs<-paste0("OM_",1:48,"t") # name all the tuning OMs
 MSElist<-sapply(1:length(OMs),MSEfunc) # store all stochastic tuning MSE results in a list
 Br30_Wt(MSElist) # return a table of weighted Br30 tunings, defaults to median, q = 0.5
 Br30_Wt(MSElist,q=0.1) # calculate weighted 10th percentile
 OM_wt # take a look at the weights that were used
 

4 Updates of which to be aware for version 7.3.x

4.1 Bluefin Tuna MSE Splash page

Bluefin MSE enthusiasts will be pleased to know that there is now a central webpage for accessing up-to-date documentation and packages.

4.2 Tuning targets

CMP developers should use a two character code followed by an integer number (0-4) corresponding to the tuning target:

4.3 Shiny App has weighted results

Each OM set (OM Set 1 on the left hand side of the app and OM Set 2 on the right hand side) has a check box for OM weighting.

This then applies the current OM weighting scheme to the quantile calculations of all panels, with the exception of the projection plots in the ‘Proj.’ and ‘Stoch. Proj’ panels. Unfortunately OM weighting was too computationally intensive for the quantile calculations over all projected years of these projection plots.

4.4 Normalization by simulation in the Zeh panel of the Shiny app

Suppose that one CMP always outperformed another CMP under all simulated conditions (all OMs and simulations), but variability among simulations was so high that this consistent out-performance was masked. To reveal it, you could calculate the mean performance among CMPs for each simulation and then plot the relative difference. Without this normalization, the distributions of CMP results would look similar; with normalization by simulation, one CMP would have a bar consistently higher than the other. This reveals the expected performance difference while controlling for the unknown simulated conditions.
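A minimal sketch of this normalization, using dummy data (not the app's code):

```r
# Sketch of normalization by simulation: dividing each CMP's result by the
# mean over CMPs for that simulation removes simulation-level variability.
set.seed(2)
sim_effect <- rlnorm(5, 0, 1)                     # strong shared variability
perf <- rbind(CMP1 = sim_effect,                  # CMP2 always 10% better
              CMP2 = 1.1 * sim_effect)
norm_perf <- sweep(perf, 2, colMeans(perf), "/")  # divide by per-sim mean
norm_perf                                         # CMP2 row consistently above CMP1
```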

This feature is now available in the Zeh panel of the Shiny app:

Note that this makes the scale on the y-axis relative (scale-free).

4.5 AAVC average annual variability in catch now fixed

This is now the mean absolute change in yield between TAC updates across the first 30 projection years (as opposed to the median).
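As a sketch of the statistic (aavc() is a hypothetical helper, assuming a vector of successive TACs, not the package's implementation):

```r
# Sketch of AAVC as the mean absolute proportional change in yield between
# successive TAC updates (aavc() is a hypothetical helper).
aavc <- function(tacs) mean(abs(diff(tacs)) / tacs[-length(tacs)])
tacs <- c(100, 110, 99, 99, 105)  # dummy TACs at successive updates
aavc(tacs)                        # ~0.065, i.e. ~6.5% average change
```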

4.6 New performance metric: OFT

OFT (Overfished Trend) returns the slope in log(SSB) over years 31:35 if the stock is below dynamic SSBMSY in projection year 30. If the stock is above dynamic SSBMSY in year 30 OFT is equal to 0.1.
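The logic can be sketched in base R (variable names and the helper OFT_sketch() are illustrative, not the package's slots or code):

```r
# Sketch of the OFT logic: slope of log(SSB) over years 31:35 if below dynamic
# SSBMSY in year 30, else 0.1. Variable names are illustrative.
OFT_sketch <- function(ssb, ssbmsy, yrs = 31:35, checkyr = 30) {
  if (ssb[checkyr] >= ssbmsy[checkyr]) return(0.1)
  unname(coef(lm(log(ssb[yrs]) ~ yrs))[2])  # trend in log-SSB
}
ssb    <- exp(0.02 * (1:35))  # dummy SSB growing at ~2% per year
ssbmsy <- rep(5, 35)          # stock below SSBMSY throughout
OFT_sketch(ssb, ssbmsy)       # slope = 0.02
```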

5 Updates of which to be aware for version 7.1.1

5.1 Robustness OMs

The 44 robustness set OMs are now included in the package (see Table 1 below).

Table 1. Robustness OMs included in this package. These are single factor deviations from the following reference grid OMs: 1A--L, 2A--L, 1B--L, 2B--L.

Note that ROMs for the non-linear index specification (ROM #25 - 28, Code NLindex_1 - NLindex_4) are just placeholders (the unmodified reference grid OMs) as these did not converge when conditioned and there was insufficient time to debug the issue.


5.2 Results_compiler is back

The function that compiles all MSE runs across the reference and robustness OMs is now working and compatible with the latest version of the shiny app.

See below for details. Results_compiler works as before, but now

  • you have to run your CMPs for the 44 ROM robustness set of OMs

AND

  • where possible please include your three-character CMP labels (e.g. c("TC1", "TC2", "TC3", "TC4")). There was much confusion before because the MP names were not the same as the codes in the SCRS papers! See the argument CMPcode below:

CompRes <- Results_compiler(dir = "C:/MSE",                                          # directory where the MSE.rda files are (defaults to current working directory)
                            name = "Toms_CMP_testing",                               # name of your MSE run
                            CMPdesc = c("USA1 is CMP that works this way",           # provide a brief description of your CMPs
                                        "USA2 is a CMP that works this way"),
                            CMPcode = c("US1","US2"))

5.3 Shiny App has been updated

The Shiny App has been updated to reflect the changes in the design of the reference grid and robustness set OMs.

You can try it out here

The 2020 version prior to conditioning is available here

6 Updates of which to be aware for version 7.x.x

6.1 !!! WARNING !!! Change in index numbering for simulated data

CMP developers beware: the new reconditioned OMs use different indices from the previous set. You must check the index numbering in the new package to make sure it is still correct for your CMPs. To do this:


library(ABTMSE)
packageVersion('ABTMSE')  # Check the installed version

[1] ‘7.8.3’

loadABT()                 # Load all OMs, OMI files, Obs models etc

ABT-MSE objects loaded

Indices
No  Name              Stock  Quarter  Area       Min        Max          Mean        Median  First year  Last year
 1  FR_AER_SUV2           1        3     7     0.014      0.136  5.584719e-02  4.670780e-02        2009       2021
 2  MED_LAR_SUV           1        2     7     1.825     99.899  2.636493e+01  1.873870e+01        2001       2021
 3  GOM_LAR_SUV           2        2     1     0.158      5.512  1.028363e+00  6.340575e-01        1977       2021
 4  GBYP_AER_SUV_BAR      1        2     7  1386.637  13418.841  5.991643e+03  4.723028e+03        2010       2021
 5  MOR_POR_TRAP          1        2     4    63.259    158.075  1.021735e+02  9.866094e+01        2012       2021
 6  JPN_LL_NEAtl2         1        4     5     2.254      8.882  6.633604e+00  6.824412e+00        2010       2021
 7  US_RR_66_114          2        3     2     0.340      2.690  9.996154e-01  8.900000e-01        1995       2020
 8  US_RR_115_144         2        3     2     0.110      2.360  1.000000e+00  8.650000e-01        1995       2020
 9  US_RR_177             2        3     2     0.279      2.518  9.704141e-01  7.861579e-01        1993       2021
10  JPN_LL_West2          2        4     2     0.183      2.160  1.022192e+00  9.273449e-01        2010       2021
11  CAN GSL               2        3     3     0.010      0.437  1.493131e-01  9.647727e-02        1988       2021
12  CAN SWNS              2        3     2     0.507      2.658  1.512114e+00  1.526137e+00        1996       2021
13  US_RR_66_144          2        3     2     0.376      2.128  1.041782e+00  9.029592e-01        1995       2021
14  MEXUS_GOM_PLL         2        2     1     0.210      2.240  1.036968e+00  9.148459e-01        1994       2021

dset_example_East$Iobs[1,3,] # First simulation of the 3rd index (n year columns where column 55 is 2019)

[1] NA NA NA NA NA NA NA NA NA NA NA NA 3.0447849 5.5122843 NA NA 1.0711127 1.5475685 1.4336471 0.4088582 NA 0.4393211 0.3901021 1.5041399 0.9881578 0.4174180 0.3834304 0.5576483 0.5835796 0.7215440 0.3124341 1.0260477 0.4408317 0.1582311 0.6340575 0.3019861 0.5438015 0.3393724 0.9293719 0.6880600 0.2333815 0.7026620 0.5799291 0.4259779 0.7640915 0.4016830 1.3528321 0.3616532 1.2268263 0.3436523 0.5057858 3.0367286 1.2425613 2.5281735 1.9162709 NA 2.1628698 2.8371319 1.4596029 2.2859021 3.5080011 7.4607589 6.5089007 5.4611323 [65] 2.1631174 3.2398648 6.0873773 3.9624870 1.6009174 4.0839939 3.3943004 2.6908588 6.3350965 3.9168989 3.8397656 3.1894454 3.2944070 8.0720397 5.2233217 3.5248721 1.3719716 2.2666611 2.1091970 3.3966287 3.9603215 4.1201402 4.1677906 5.0911860 5.8208445 1.3741426 4.0116513 5.2266302 2.6508640 3.3199345 6.8984511 3.9003123 2.6463963 1.7556261 6.3471776 5.3202185 3.0574772 2.5469254 2.6198682 4.1966147 4.3859324 4.5455850 7.8713406

6.2 !!! WARNING !!! Change in MSE run code

To reduce the possibility of errors, OMs now include the required observation error model, implementation error model, recruitment stochasticity and management interval information. E.g.


OM_1@Obs

[1] “Good_Obs”

OM_1d@Obs

[1] “Perfect_Obs”


OM_1@IE

[1] “Umax_90”

OM_1d@IE

[1] “Umax_90”


OM_1@Deterministic

[1] FALSE

OM_1d@Deterministic

[1] TRUE


OM_1@interval

[1] 2

OM_1d@interval

[1] 2

This means that you no longer have to specify these when running an MSE; you just specify the OM and the MPs and the new OM slots do the rest:


myMPs<-list(c('U5','U5'),
            c('MeanC','DD_i4'),
            c('MeanC','MeanC'))

MSE_example<-new('MSE',OM=OM_1d,MPs=myMPs)

Including the implementation error model means that robustness OMs can include the overages and hyperstability without the user having to include more arguments to the MSE run code above.

6.3 Reconditioning

48 reference grid OMs were refitted with data to 2019, including changes to:

  • Indices
  • CAL data for 2016 and before
  • Senescence M for ages 26+ higher for low M (B) scenarios
  • Dropping the western mixing factor (all reference grid OMs are western mixing of 1% - the old factor level I)

There are now ‘only’ 48 reference grid OMs. Here is the match-up in numbering between the new reconditioned OMs and the old OMs:

6.4 New performance metrics

Note that the package now includes AvC10, AvgBr and PGT performance metrics. You can find out more about these using ? at the command line, e.g. ?PGT.


7 Introduction

The objective of this document is to provide a concise guide to designing, debugging, testing and tuning candidate management procedures (CMPs) using the Atlantic Bluefin Tuna MSE R package. For further information on the operating model structure, conditioning of operating models, data used in conditioning and other features of the R package, readers are referred to the more extensive ABT MSE R package user guide and the Trials Specifications document.

Atlantic Bluefin tuna are managed by the setting of TACs for the East and West Atlantic areas. The ABT MSE operating model is unique in that it models the mixing of Eastern and Western stocks through these two areas. Consequently, management advice for the East impacts both Eastern and Western stocks, and similarly for the West. This means that there is no way to evaluate a West Area Management Procedure in isolation without also specifying an East Area Management Procedure. Unlike the testing of management procedures elsewhere, Atlantic bluefin tuna MP development must test pairs of CMPs simultaneously, including a CMP for the West area combined with a CMP for the East area.


8 Installation

In this installation script I am assuming that your R package installer and other files are stored in a folder ‘C:/ABT-MSE/’.

  1. Install R for Windows
  2. Install RStudio
  3. Install Package dependencies:
source("C:/ABT-MSE/Depends.r") 
  4. Install the ABTMSE R package by opening RStudio and entering the following code at the R console (making sure to change the file path to reflect where you put the ABT-MSE folder on your computer):
install.packages("C:/ABT-MSE/ABTMSE_7.8.3.tar.gz") 
  5. Check that the installation is successful by finding this help file (the ABTMSE vignette):
library(ABTMSE)
packageVersion('ABTMSE')

[1] ‘7.8.3’

??ABTMSE
  6. Load all Package data and objects into the current R session:
loadABT()

ABT-MSE objects loaded

If you have any difficulties please send an inquiry including some reproducible code to:


9 Useful features

9.1 Compiled results manipulation

Compiled results produced by the Results_compiler() function can be joined using the function Join_Results().

A compiled results object can also be reduced down to a subset of MPs using Sub_Results()


CompRes1 <- Results_compiler(dir = "C:/MSE1")
CompRes2 <- Results_compiler(dir = "C:/MSE2")

CompResAll <- Join_Results(list(CompRes1,CompRes2))

ind <- CompResAll$MPnames!="MP_I_wish_to_remove"

CompResSub <- Sub_Results(CompResAll,ind)

9.2 MSE results compilation

The MSE technical assistant no longer has access to cloud computing and thus each CMP developer is expected to provide compiled results on a shared google drive.

To facilitate this, the package now includes a function Results_compiler() which allows users to store the essential results information from all MSE runs for their CMP(s) so that these can be easily shared with the MSE technical assistant and other CMP developers.

A full set of MSE objects for all reference set and robustness set operating models can be over 2Gb in size. The compiled results object for all of these runs is around 30-60 Mb and therefore a much more manageable file for sharing.

The following example script shows a simple way to run the MSEs and compile results for the deterministic operating models. The final results object (in the example below CompRes_USA.rda) should be uploaded to the shared Google drive. The object can also be uploaded to the revised shiny App so that users can also view their MSE results in that format.


library(ABTMSE)
loadABT()
setwd("C:/MSE")                                             # Your working directory

# --- 1 --- Load and select CMPs ----------------------------------------------

source("myCMPs.R")                                          # An R script containing all of your CMPs 
myCMPs<-list(c("USA1E","USA1W"),c("USA2E","USA2W"))         # A named list of the CMPs you'd like to run

# --- 2 --- Run all MSEs ------------- ~ 10 mins per OM so ~ 1-2 days computation with a single processor (memory constraints limit parallel computation)

# Reference set operating models

for(i in 1:48){
  tempMSE<-new('MSE', OM=get(paste0('OM_',i,'d')), MPs=myCMPs)     # Deterministic OMs, defaults to 2-year interval, Umax_90 imp. error model
  saveRDS(tempMSE,file=paste0(getwd(),"/MSE_",i,".rda"))     # saves a set of objects named MSE_1.rda, MSE_2.rda, ..., MSE_48.rda
}

# Robustness set operating models

for(i in 1:44){
  tempMSE<-new('MSE',OM=get(paste0('ROM_',i,'d')),MPs=myCMPs)    # Deterministic OMs, defaults to 2-year interval, Umax_90 imp. error model
  saveRDS(tempMSE,file=paste0(getwd(),"/MSE_R_",i,".rda"))       # saves a set of objects named MSE_R_1.rda, MSE_R_2.rda, ..., MSE_R_44.rda
}


# --- 3 --- Compile results ----------------------------------------------------

CompRes <- Results_compiler(dir = "C:/MSE",                                          # directory where the MSE.rda files are (defaults to current working directory)
                            name = "Toms_CMP_testing",                               # name of your MSE run
                            CMPdesc = c("USA1 is CMP that works this way",           # provide a brief description of your CMPs
                                        "USA2 is a CMP that works this way"),
                            CMPcode = c("US1","US2"))

saveRDS(CompRes,"C:/CompRes_USA.rda")    # save a compiled results object for uploading to the google drive
                                         # https://drive.google.com/drive/folders/1_KlzO13-0E3dFYQWq5k8iWWp9tgR1aRB?usp=sharing 
                                         # or visualization via the shiny app (http://142.103.48.20:3838/ABTMSE/)

9.3 Shiny App

The App is currently for the previous set of OMs and CMPs (ie before reconditioning, ABTMSE v6.x.x).

The Shiny App for visualizing ABTMSE results is hosted here.

You can also run the App locally on your computer (for now the default is a presentation of 3% and 5% harvest rate CMPs for demonstration purposes only):

library(ABTMSE)
Shiny('ABTMSE')

New features include:

  • Violin plot option
  • Load / save of user results files created with the Results_compiler() function
  • Global MP filtering / selection
  • Zeh plots over all OMs for a single performance metric and CMP (and color coding by factor level)
  • Trade-off plots with error bars
  • Stochastic (3 CMPs) and deterministic projection plots (all CMPs)
  • Downloadable results tables
  • Toggles for reference and robustness sets
  • Glossaries for CMPs and performance metrics
  • Performance output tab that provides East-West performance trade-offs via a Radar Plot

For each iteration of CMP results comparisons, the technical assistant will compile all results and make these the default presentation when users go to the app.

Please send any bug reports or suggestions for improvements / new figures etc to

9.4 Plotting indices

At the end of an MSE run, the ‘true’ vulnerable biomass corresponding to each index is stored in addition to the indices that were simulated with error and autocorrelation. These are the slots VBi and Iobs, respectively. They are 4-dimensional: MP x simulation x index x year. These include historical years and future projections (remember year 52 is 2016).

You can plot these slots using the function plot_Indices():

plot_Indices(MSE_example,MPs=c(1,3),index="GOM_LAR_SUV")

9.5 Plotting recruitment

You can plot the recruitment simulated for a pair of MPs using the function plot_Recruitment(). This provides historical and projected SSB, SSB0, SSBMSY, asymptotic recruitment, stochastic recruitment and projected deterministic stock-recruit relationships.

plot_Recruitment(MSE_example)

9.6 Check mode

Running the MSE with check = T:


checkMSE <- new('MSE', OM=OM_1d, check=T)

Constructing arrays
Calculating historical fishing mortality rate at length (computationally intensive)
Initializing simulations
Running historical simulations

now shows perfect recreation of M3 dynamics in the ABT-MSE framework. You can also see plots of the expected spatial distribution of catch-at-age by fleet in the final year.

You will furthermore see the historical recreation of each of the indices available to MPs for future projections.

10 Operating models

10.0.1 Reference Grid

An MSE design object Design was also loaded with loadABT(). Design is a list of the factors and their levels used in defining the reference set of operating models, i.e. the MSE design outlined in the Trial Specifications document.

Here is a figure showing how the reference OM numbering relates to the four factors (recruitment, spawning fraction / natural mortality rate M, scale, and weighting of length composition data):

The list item all_levs shows all levels that were used in defining the factorial design of the MSE reference operating models:

Design$all_levs

[[1]] [1] “1” “2” “3”

[[2]] [1] “A” “B”

[[3]] [1] “--” “-+” “+-” “++”

[[4]] [1] “L” “H”

A text description of these is also available:

Design$all_lnams

[[1]] [1] “1: West: h=0.6 to h=0.9 1975+, East: h=0.98 for 1987- to h=0.98 1988+” “2: West: B-H h=0.6 all years, East: B-H h=0.7 all years” “3: West: post 75+ changes to pre ’75 after 10 yrs, East: 88+ to ’50-87 after 10 years”

[[2]] [1] “A: Younger spawning, High M” “B: Older spawning, Low M”

[[3]] [1] “--: mean SSB 15kt West, 200kt East” “-+: mean SSB 15kt West, 400kt East” “+-: mean SSB 50kt West, 200kt East” “++: mean SSB 50kt West, 400kt East”

[[4]] [1] “L: Low length composition weight of 1/20” “H: High length composition weight of 1”

The full design matrix can be found in the list item Design_Ref and is just the factorial combination of these levels:

Design$Design_Ref

   Var1 Var2 Var3 Var4
1     1    A   --    L
2     2    A   --    L
3     3    A   --    L
4     1    B   --    L
5     2    B   --    L
6     3    B   --    L
7     1    A   -+    L
8     2    A   -+    L
9     3    A   -+    L
10    1    B   -+    L
11    2    B   -+    L
12    3    B   -+    L
13    1    A   +-    L
14    2    A   +-    L
15    3    A   +-    L
16    1    B   +-    L
17    2    B   +-    L
18    3    B   +-    L
19    1    A   ++    L
20    2    A   ++    L
21    3    A   ++    L
22    1    B   ++    L
23    2    B   ++    L
24    3    B   ++    L
25    1    A   --    H
26    2    A   --    H
27    3    A   --    H
28    1    B   --    H
29    2    B   --    H
30    3    B   --    H
31    1    A   -+    H
32    2    A   -+    H
33    3    A   -+    H
34    1    B   -+    H
35    2    B   -+    H
36    3    B   -+    H
37    1    A   +-    H
38    2    A   +-    H
39    3    A   +-    H
40    1    B   +-    H
41    2    B   +-    H
42    3    B   +-    H
43    1    A   ++    H
44    2    A   ++    H
45    3    A   ++    H
46    1    B   ++    H
47    2    B   ++    H
48    3    B   ++    H

This design grid explains the nomenclature of the various operating models (class OM) and their input definition files (class OMI) which are numbered from 1 to 48 corresponding to the rows of this design matrix:

avail('OM')

[1] “OM_1” “OM_10” “OM_10d” “OM_10e” “OM_10s” “OM_10t” “OM_11” “OM_11d” “OM_11e” “OM_11s” “OM_11t” “OM_12” “OM_12d” “OM_12e” “OM_12s” “OM_12t” “OM_13” “OM_13d” “OM_13e” “OM_13s” “OM_13t” “OM_14” “OM_14d” “OM_14e” “OM_14s” “OM_14t” “OM_15” “OM_15d” “OM_15e” “OM_15s” “OM_15t” “OM_16” “OM_16d” “OM_16e” “OM_16s” “OM_16t” “OM_17” “OM_17d” “OM_17e” “OM_17s” “OM_17t” “OM_18” “OM_18d” “OM_18e” “OM_18s” “OM_18t” “OM_19” “OM_19d” “OM_19e”
[50] “OM_19s” “OM_19t” “OM_1d” “OM_1e” “OM_1s” “OM_1t” “OM_2” “OM_20” “OM_20d” “OM_20e” “OM_20s” “OM_20t” “OM_21” “OM_21d” “OM_21e” “OM_21s” “OM_21t” “OM_22” “OM_22d” “OM_22e” “OM_22s” “OM_22t” “OM_23” “OM_23d” “OM_23e” “OM_23s” “OM_23t” “OM_24” “OM_24d” “OM_24e” “OM_24s” “OM_24t” “OM_25” “OM_25d” “OM_25e” “OM_25s” “OM_25t” “OM_26” “OM_26d” “OM_26e” “OM_26s” “OM_26t” “OM_27” “OM_27d” “OM_27e” “OM_27s” “OM_27t” “OM_28” “OM_28d”
[99] “OM_28e” “OM_28s” “OM_28t” “OM_29” “OM_29d” “OM_29e” “OM_29s” “OM_29t” “OM_2d” “OM_2e” “OM_2s” “OM_2t” “OM_3” “OM_30” “OM_30d” “OM_30e” “OM_30s” “OM_30t” “OM_31” “OM_31d” “OM_31e” “OM_31s” “OM_31t” “OM_32” “OM_32d” “OM_32e” “OM_32s” “OM_32t” “OM_33” “OM_33d” “OM_33e” “OM_33s” “OM_33t” “OM_34” “OM_34d” “OM_34e” “OM_34s” “OM_34t” “OM_35” “OM_35d” “OM_35e” “OM_35s” “OM_35t” “OM_36” “OM_36d” “OM_36e” “OM_36s” “OM_36t” “OM_37”
[148] “OM_37d” “OM_37e” “OM_37s” “OM_37t” “OM_38” “OM_38d” “OM_38e” “OM_38s” “OM_38t” “OM_39” “OM_39d” “OM_39e” “OM_39s” “OM_39t” “OM_3d” “OM_3e” “OM_3s” “OM_3t” “OM_4” “OM_40” “OM_40d” “OM_40e” “OM_40s” “OM_40t” “OM_41” “OM_41d” “OM_41e” “OM_41s” “OM_41t” “OM_42” “OM_42d” “OM_42e” “OM_42s” “OM_42t” “OM_43” “OM_43d” “OM_43e” “OM_43s” “OM_43t” “OM_44” “OM_44d” “OM_44e” “OM_44s” “OM_44t” “OM_45” “OM_45d” “OM_45e” “OM_45s” “OM_45t”
[197] “OM_46” “OM_46d” “OM_46e” “OM_46s” “OM_46t” “OM_47” “OM_47d” “OM_47e” “OM_47s” “OM_47t” “OM_48” “OM_48d” “OM_48e” “OM_48s” “OM_48t” “OM_4d” “OM_4e” “OM_4s” “OM_4t” “OM_5” “OM_5d” “OM_5e” “OM_5s” “OM_5t” “OM_6” “OM_6d” “OM_6e” “OM_6s” “OM_6t” “OM_7” “OM_7d” “OM_7e” “OM_7s” “OM_7t” “OM_8” “OM_8d” “OM_8e” “OM_8s” “OM_8t” “OM_9” “OM_9d” “OM_9e” “OM_9s” “OM_9t” “OM_example” “ROM_1” “ROM_10” “ROM_10d” “ROM_11”
[246] “ROM_11d” “ROM_12” “ROM_12d” “ROM_13” “ROM_13d” “ROM_14” “ROM_14d” “ROM_15” “ROM_15d” “ROM_16” “ROM_16d” “ROM_17” “ROM_17d” “ROM_18” “ROM_18d” “ROM_19” “ROM_19d” “ROM_1d” “ROM_2” “ROM_20” “ROM_20d” “ROM_21” “ROM_21d” “ROM_22” “ROM_22d” “ROM_23” “ROM_23d” “ROM_24” “ROM_24d” “ROM_25” “ROM_25d” “ROM_26” “ROM_26d” “ROM_27” “ROM_27d” “ROM_28” “ROM_28d” “ROM_29” “ROM_29d” “ROM_2d” “ROM_3” “ROM_30” “ROM_30d” “ROM_31” “ROM_31d” “ROM_32” “ROM_32d” “ROM_33” “ROM_33d”
[295] “ROM_34” “ROM_34d” “ROM_35” “ROM_35d” “ROM_36” “ROM_36d” “ROM_37” “ROM_37d” “ROM_38” “ROM_38d” “ROM_39” “ROM_39d” “ROM_3d” “ROM_4” “ROM_40” “ROM_40d” “ROM_41” “ROM_41d” “ROM_42” “ROM_42d” “ROM_43” “ROM_43d” “ROM_44” “ROM_44d” “ROM_4d” “ROM_5” “ROM_5d” “ROM_6” “ROM_6d” “ROM_7” “ROM_7d” “ROM_8” “ROM_8d” “ROM_9” “ROM_9d”

You’ll notice that some objects in the ABTMSE package have the extension _example and _d. The example objects are streamlined objects that are suitable for tutorials and demonstrations and involve less computational overhead (e.g. they use fewer simulations and require less system memory).

The ‘d’ objects are deterministic operating models that include only two identical simulations, fixed to the maximum likelihood values of the operating model fits, with no observation error or stochastic recruitment. These are useful for MP testing since they provide a quick evaluation of MP performance at the most probable combination of operating model parameters.

11 Selecting CMPs for testing

Specifying CMPs is somewhat more complex in a multi-stock MSE as they can be specific to data from certain areas. For example DD_MED may be a delay-difference assessment fitted to a relative abundance index for spawning biomass in the Mediterranean (e.g. a larval survey).

To specify management procedures in this MSE framework you must select an MP for each spatially distinct area to which an MP is to be applied, for example an index-based CMP in the West and a surplus production stock assessment in the East. Traditionally the spatial division for Atlantic bluefin has been (mainly) at 45 degrees W. In this quick start section we are going to ignore MP areas and the package will assume the default setup, which is 2 data areas and 2 MPs, one per area, divided at 45 degrees W.

A related issue is fleet allocation of regional TACs that are calculated by CMPs (the 2020 allocation is provided by ICCAT and assumed constant into the future; it is a matrix object called ‘Allocation’ [area x fleet] that is loaded with loadABT()).

You can search for MPs and find help documentation on these:

avail('MP')

[1] “BR_E” “BR_W” “CDD_i4” “CurC100” “CurC125” “CurC150” “CurC25” “CurC50” “CurC75” “DD_i4” “DD_i4_4010” “EMP1e” “EMP1w” “EMP2e” “EMP2w” “Fadapt_i4” “Fadapt_i7” “FO_E” “FO_W” “Islope1_i4” “Islope1_i7” “LstepCC4” “LW_E” “LW_W” “MCD_i4” “MeanC” “SBT2” “SP_i4” “SPslope_i4” “U10” “U15” “U20” “U3” “U5” “ZeroC”

?U5

For now we’re going to select a few premade MPs for a demonstration MSE.

As mentioned above, CMPs must be chosen in groups, one corresponding to each area for which data are disaggregated. In the default MSE run there are two per management system (a CMP in the East and a CMP in the West). We are going to use a list object to store our pairs of CMPs:

myMPs<-list(c('U5','U5'),
          c('MeanC','DD_i4'),
          c('MeanC','MeanC'))

This code will test three management systems. One where a fixed harvest rate of 5 percent is used in both the West and the East, a second where mean historical catches are the TAC in the East and a delay-difference model is applied in the West, and a third management system where the TAC is set to mean historical catches in both the West and the East. The principal motivation for choosing these MPs in this tutorial is that they run quickly.

For now we are going to use one of the prebuilt OMs, OM_example, which only requires the computation of 16 simulations.

12 Running an MSE

Management strategy evaluation is computationally intensive and typically requires the recursive application of statistical models (CMPs) for many independent simulations.

It is a computational problem that is ‘embarrassingly parallel’ and therefore can make use of the extra horsepower of a modern PC with multiple threads (typically 4 threads for an i5 Intel CPU and 8 for a hyperthreaded i7, although it is not uncommon for contemporary workstations to have 32 or more threads).

To use parallel processing we need to initialize a cluster on your computer:

sfInit(parallel=TRUE, cpus=detectCores()) 

R Version: R version 4.2.1 (2022-06-23 ucrt)

To run an MSE it is necessary to specify the:

  • operating model,
  • observation error model,
  • CMPs to be tested,
  • implementation error model (simulating how well CMP recommended TACs are followed in the simulated fishery) and
  • the interval at which MPs will be re-run (how often new TAC advice is calculated).

In this preliminary run we select the example operating model OM_example, our custom MPs above, and an update interval of 2 years (i.e. MP run 1 sets TAC1, which applies for two years; MP run 2 sets TAC2; and so on).

Note that this will take a few minutes to run:

MSE_example<-new('MSE',OM=OM_example,MPs=myMPs)

You may have noticed that although we specified only three management systems (three pairs of East/West MPs), four were in fact tested: the first, labelled ZeroC (East) and ZeroC (West), was not one we specified.

ZeroC is a reference management procedure that recommends a TAC very close to zero (just enough catch to keep generating catch data, but negligible removals).

The purpose of the ZeroC MP is to generate performance metrics that evaluate future stock size with reference to the highest simulated biomass achievable. In its current configuration the MSE function will always test this ZeroC MP alongside any that you specify.
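As a sketch of what such a reference MP looks like in code (a hypothetical stand-in, not the packaged ZeroC implementation), a near-zero constant-catch MP needs only the standard two arguments and the MP class tag:

```r
# Hypothetical ZeroC-style reference MP: always recommends a near-zero TAC so
# that catch data keep being generated while removals are negligible.
ZeroC_sketch <- function(x, dset) 1e-6   # tiny constant TAC for any simulation x
class(ZeroC_sketch) <- "MP"              # tag as class MP so it can be found and tested

ZeroC_sketch(x = 1, dset = list())       # returns 1e-06 whatever the data contain
```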

13 MSE outputs

A very large amount of information has been stored in the MSE object MSE_example. A standard performance metrics table can be calculated using the function getperf():

getperf(MSE_example)

$East
             AvC10  AvC20  AvC30     C1    C10    C20    C30   D10   D20   D30    LD   DNC  LDNC POF PNOF POS PNOS PGK PNRK  VarC AvgBr  Br20  Br30 OFT PrpOF AvUrel
ZeroC-ZeroC  0.000  0.000  0.000  0.000  0.000  0.000  0.000 1.997 2.468 2.680 2.053 1.000 1.000   0    1   0    1   1    1 0.000 2.433 2.468 2.680 0.1     0  0.000
U5-U5       54.159 58.197 57.771 51.334 54.159 58.197 60.129 1.353 1.410 1.422 1.330 0.533 0.532   0    1   0    1   1    1 2.415 1.404 1.410 1.422 0.1     0  0.594
MeanC-MeanC 27.347 27.367 27.362 27.213 27.347 27.367 27.366 1.663 1.960 2.118 1.694 0.790 0.787   0    1   0    1   1    1 0.096 1.942 1.960 2.118 0.1     0  0.226
MP_E-MP_W   42.818 54.648 55.737 37.800 42.818 54.648 69.746 1.500 1.568 1.434 1.397 0.535 0.535   0    1   0    1   1    1 7.705 1.527 1.568 1.434 0.1     0  0.533

$West
            AvC10 AvC20 AvC30    C1   C10   C20   C30   D10   D20   D30    LD   DNC  LDNC POF PNOF POS PNOS PGK PNRK  VarC AvgBr  Br20  Br30    OFT PrpOF AvUrel
ZeroC-ZeroC 0.000 0.000 0.000 0.000 0.000 0.000 0.000 2.044 2.647 2.852 2.117 1.000 1.000   0    1   0    1   1    1 0.000 2.586 2.647 2.852  0.100 0.000  0.000
U5-U5       3.740 3.967 3.919 3.656 3.740 3.967 4.063 1.175 1.088 1.026 0.990 0.364 0.362   1    0   0    1   0    1 6.859 1.085 1.088 1.026  0.100 1.000  1.164
MeanC-MeanC 3.351 3.351 3.351 3.360 3.351 3.351 3.351 1.294 1.390 1.450 1.254 0.509 0.504   0    1   0    1   1    1 0.059 1.402 1.390 1.450  0.100 0.033  0.789
MP_E-MP_W   3.242 4.138 4.221 2.862 3.242 4.138 5.281 1.313 1.226 0.948 0.943 0.331 0.331   1    0   1    0   0    0 5.375 1.180 1.226 0.948 -0.013 0.600  1.176

Two tables are produced in a list object, one table for each stock.

The tables show performance by MP (row) and performance metric (column). Each performance metric is an object of class PM and has help documentation describing it:

avail('PM')

[1] “AvC10” “AvC20” “AvC30” “AvgBr” “AvUrel” “Br20” “Br30” “C1” “C10” “C20” “C30” “D10” “D20” “D30” “DNC” “LD” “LDNC” “OFT” “PGK” “PNOF” “PNOS” “PNRK” “POF” “POS” “PrpOF” “RD30” “VarC”

?Br30
?AvC30

Projection plots can also be produced:

plot(MSE_example)

Each row of this plot represents a pair of CMPs (a management system). Projections of catch and SSB are shown in terms of their median (solid black line), 5th and 95th percentiles (the shaded region) and ten individual simulations (the multicolored lines).

Note that among MPs, each simulation has identical future biological conditions (recruitment, growth etc). In other words a red projected line (a simulation) for one CMP corresponds with the red projected line for the other CMPs. By keeping background simulation conditions the same among CMP runs, the relative performance of CMPs is revealed by many fewer simulations (one CMP cannot outperform another due to sampling a more productive future by chance).
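The benefit of shared simulation conditions can be illustrated with a toy base-R example (the numbers and the 5% difference are made up purely for illustration):

```r
# Two hypothetical management systems compared under the SAME ('common') future
# recruitment draws: any difference between them is then purely due to the MPs.
set.seed(1)
rec    <- rlnorm(200, 0, 0.5)   # shared future recruitment deviations
yieldA <- 1.00 * rec            # outcome under system A
yieldB <- 1.05 * rec            # outcome under system B (5% more productive)
mean(yieldB > yieldA)           # 1: B beats A in every single simulation
# With independent draws per system the same comparison would be noisy and
# would require many more simulations to detect a 5% difference reliably.
```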

A principal objective of MSE is to reveal the trade-offs among management objectives. The package includes a standard trade-off plot:

Tplot(MSE_example)

Six performance metrics are plotted:

  1. Long Term Yield (mean yield over the final 21-30 projected years)
  2. Probability of being in the Green Kobe region (‘F<FMSY’ and ‘B>BMSY’) over the entire projection.
  3. Probability of being in an overfished state (B<BMSY) over the entire projection
  4. Probability of overfishing (F>FMSY) over the entire projection
  5. Average Annual Variability in Yield (AAVY) the mean fractional difference in catches among years.
  6. Final depletion, the mean spawning stock biomass relative to unfished in the final 21-30 projected years (note that the default definition of ‘unfished’ biomass in this framework is dynamic SSB0 - the spawning stock biomass that would have been obtained had zero catches occurred in the fishery).

In the first row, the six metrics are plotted against each other in three panels for the eastern stock. The second row repeats this for the western stock.
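To make metric 5 concrete, AAVY for a short toy catch series can be computed as follows (illustrative numbers only; the packaged VarC PM may differ in detail):

```r
# Toy AAVY: mean fractional change in catch between successive years.
C    <- c(100, 120, 90, 95)                 # made-up catch series
AAVY <- mean(abs(diff(C)) / C[-length(C)])  # |C[t+1]-C[t]| / C[t], averaged
round(AAVY, 3)                              # 0.169
```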

14 Designing Management Procedures

14.1 dset: a standard data format for simulated data

When the MSE runs, in every year that the TAC is updated by a CMP, simulated data are generated. These simulated data are calculated from operating model quantities (e.g. spawning biomass, real simulated catches) subject to autocorrelation and precision defined by the observation error model. These simulated data are used by the CMPs to generate TAC advice.

There is a standard format for simulated datasets.

 ?dset

Two examples were loaded at the start of this session by the loadABT() function. These are ‘snapshots’ of data that have occurred at some point in the forward projection of an MSE using an MP:

  • dset_example_East
  • dset_example_West

These example data sets are helpful for designing and testing your own CMPs.

Additionally, every time an MSE is run, the simulated dataset in current use is passed to the global environment (as an object called dset). It follows that if the MSE function crashes due to an error in a CMP, it is easy to test the CMP again using the offending simulated data and debug the problem (note that the dset you see after an MSE crash is a list object, so dset[[1]] is the equivalent of dset_example_East, for example).

14.2 Visualizing simulated indices

Plots of the various indices can also be produced from the MSE object slots VBi (true simulated vulnerable biomass) and Iobs (vulnerable biomass subject to error and autocorrelation):

plot_Indices(MSE_example,index="GOM_LAR_SUV")

14.3 Anatomy of an MP

Management Procedures (object class MP) can vary greatly in their complexity ranging from a constant catch MP to an MP that uses data-filtering algorithms linked to stock assessments coupled to harvest control rules.

However, all CMPs have the same basic inputs/outputs: they operate on a simulation number x of a simulated dataset object dset and return a single TAC recommendation.

Let's look at a relatively simple CMP, Islope1, which has variants depending on which index you want to use (below is the variant that uses index number 4):

 avail('MP')

[1] “BR_E” “BR_W” “CDD_i4” “CurC100” “CurC125” “CurC150” “CurC25” “CurC50” “CurC75” “DD_i4” “DD_i4_4010” “EMP1e” “EMP1w” “EMP2e” “EMP2w” “Fadapt_i4” “Fadapt_i7” “FO_E” “FO_W” “Islope1_i4” “Islope1_i7” “LstepCC4” “LW_E” “LW_W” “MCD_i4” “MeanC” “SBT2” “SP_i4” “SPslope_i4” “U10” “U15” “U20” “U3” “U5” “ZeroC”

 ?Islope1_i4
 Islope1

function(x,dset,yrsmth=5,lambda=0.4,xx=0.2,ii){
  ny<-length(dset$Cobs[x,])
  ind<-(ny-(yrsmth-1)):ny
  C_dat<-dset$Cobs[x,]
  if(is.na(dset$MPrec[x])){
    TACstar<-(1-xx)*mean(C_dat)
  }else{
    TACstar<-dset$MPrec[x]
  }
  I_hist<-dset$Iobs[x,ii,ind]
  yind<-1:yrsmth
  slppar<-summary(lm(I_hist~yind))$coefficients[2,1:2]
  Islp<-slppar[1]
  TACstar*(1+lambda*Islp)
}
<bytecode: 0x000001d8481cd310>
<environment: namespace:ABTMSE>

 Islope1_i4

function(x,dset)Islope1(x,dset,ii=4)
<bytecode: 0x000001d8481bdd80>
<environment: namespace:ABTMSE>
attr(,"class")
[1] "MP"

Islope1 creates a reference catch level TACstar based on historical catch levels and an adjustment factor “xx”. The MP then makes TAC recommendations with the aim of keeping a constant index level.
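The slope-to-TAC logic can be seen in isolation with toy numbers (a sketch of the same idea, not a call to the packaged function):

```r
# Toy Islope-style update: fit a linear trend to the recent index and nudge
# the TAC in the direction of the slope (index values are made up).
I_hist  <- c(1.00, 1.05, 1.08, 1.12, 1.15)               # rising toy index
yind    <- seq_along(I_hist)
Islp    <- summary(lm(I_hist ~ yind))$coefficients[2, 1] # estimated slope
TACstar <- 1000                                          # reference catch level
TAC     <- TACstar * (1 + 0.4 * Islp)                    # lambda = 0.4
TAC > TACstar   # TRUE: the index is trending up, so the TAC is nudged upward
```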

Probably the easiest way to understand how an MP works is to design your own.

14.4 Designing and testing a simple average catch MP

Here we will create a new CMP and make it compatible with the ABT-MSE framework. Let's start simple and design a CMP that sets TACs according to historical average catches.

Simulated data objects dset have a slot for observed historical catches labelled Cobs. Let's examine one of the examples:

 dim(dset_example_East$Cobs)

[1] 16 107

This is a matrix with nsim rows and nyear columns. We have to design an MP that will, for any simulation number x, return the mean historical catch. In R this is very terse:

 MeanC<-function(x,dset) mean(dset$Cobs[x,])
 class(MeanC)<-"MP"

The first argument of a CMP function must be the simulation number x. The second argument must be the simulated dataset dset. This ensures that the CMP is compatible with parallel processing using the sfSapply() function.

Also required is that the CMP returns a single numeric value that is the TAC recommendation, in this case the mean historical catch level for that simulation x.

In the second line of code we assign a class of MP to our new MeanC management procedure. It is important to do this for your CMP because it allows users to search for MPs.

Now we can test our CMP using some example data. The simplest way is to pick a simulation number in this case the 1st simulation:

 MeanC(x=1,dset=dset_example_East)

[1] 41965436

Let's make sure this works for all simulations using the regular sapply() function:

 nsim<-nrow(dset_example_West$Cobs)
 sapply(1:nsim,MeanC,dset_example_East)

[1] 41965436 42025014 40949364 42250954 41420207 42012811 42501265 39906905 42115190 42266253 41861549 41721875 41736583 41576188 42525146 41150887

Lastly you could make sure that the MP works in parallel (there are no functions or objects that the cluster can’t see):

 sfInit(parallel=T,cpus=detectCores())
 sfSapply(1:nsim,MeanC,dset_example_East)

[1] 41965436 42025014 40949364 42250954 41420207 42012811 42501265 39906905 42115190 42266253 41861549 41721875 41736583 41576188 42525146 41150887

Note that for simple CMPs like this, parallel processing isn’t necessary and might even be a bit slower due to the overhead of sending data to the cluster. However for more complex MPs that involve estimation phases, parallel processing can speed up the analysis by a large margin (typically running in 1/(0.8*nthreads) of the time).

For large MSE analyses with many simulations and CMPs, very large clusters can be used, with as many as 1,000 threads.

14.4.1 Designing a somewhat more complex surplus production MP

Many CMPs use numerical optimization to fit models to data. Good examples are stock assessments such as the surplus production models used to provide management advice for many Atlantic tuna and billfish species, for example BSP (McAllister et al.) and ASPIC.

Here we design a relatively simple three parameter, observation error only, surplus production model. The new SP MP requires two functions:

  1. The MP function (data in, TAC out)
  2. The estimation model (that is used inside the MP function to find parameters that lead to the best fit to historical data)

Similarly to many CMPs, this surplus production model requires an index of abundance. This means that the MP will be specific to a particular assessment area and ‘stock’.

In this case we are going to create a CMP for the Western Stock using the GOM_LAR_SUV index. Remember these indices are stored in the @MPind slot of the observation model and also the object ‘Indices’:


unique(Bad_Obs@MPind$Name)

[1] “FR_AER_SUV2” “MED_LAR_SUV” “GOM_LAR_SUV” “GBYP_AER_SUV_BAR” “MOR_POR_TRAP” “JPN_LL_NEAtl2” “US_RR_66_114” “US_RR_115_144” “US_RR_177” “JPN_LL_West2” “CAN GSL” “CAN SWNS” “US_RR_66_144” “MEXUS_GOM_PLL”

Indices
    No             Name Stock Quarter Area      Min       Max         Mean       Median First year Last year
1    1      FR_AER_SUV2     1       3    7    0.014     0.136 5.584719e-02 4.670780e-02       2009      2021
14   2      MED_LAR_SUV     1       2    7    1.825    99.899 2.636493e+01 1.873870e+01       2001      2021
35   3      GOM_LAR_SUV     2       2    1    0.158     5.512 1.028363e+00 6.340575e-01       1977      2021
80   4 GBYP_AER_SUV_BAR     1       2    7 1386.637 13418.841 5.991643e+03 4.723028e+03       2010      2021
92   5     MOR_POR_TRAP     1       2    4   63.259   158.075 1.021735e+02 9.866094e+01       2012      2021
102  6    JPN_LL_NEAtl2     1       4    5    2.254     8.882 6.633604e+00 6.824412e+00       2010      2021
114  7     US_RR_66_114     2       3    2    0.340     2.690 9.996154e-01 8.900000e-01       1995      2020
140  8    US_RR_115_144     2       3    2    0.110     2.360 1.000000e+00 8.650000e-01       1995      2020
166  9        US_RR_177     2       3    2    0.279     2.518 9.704141e-01 7.861579e-01       1993      2021
195 10     JPN_LL_West2     2       4    2    0.183     2.160 1.022192e+00 9.273449e-01       2010      2021
207 11          CAN GSL     2       3    3    0.010     0.437 1.493131e-01 9.647727e-02       1988      2021
241 12         CAN SWNS     2       3    2    0.507     2.658 1.512114e+00 1.526137e+00       1996      2021
267 13     US_RR_66_144     2       3    2    0.376     2.128 1.041782e+00 9.029592e-01       1995      2021
294 14    MEXUS_GOM_PLL     2       2    1    0.210     2.240 1.036968e+00 9.148459e-01       1994      2021

Here is a CMP function that processes the data and has a couple of extra bells and whistles for evaluating model fit, diagnosing convergence and also initiating the model at a given depletion level:


SP_i4<-function(x,dset,startD=0.5,checkfit=F){                  # a very simple surplus production model, r, K and q leading
  
  nyears<-dim(dset$Iobs)[3]               # get the number of years of data (the 3rd dimension of Iobs)
  Ind<-dset$Iobs[x,4,1:nyears]            # get the index
  yind<-(1:nyears)[!is.na(Ind)]           # find the years where the index is available (ie not NA)
  
  C_hist <- dset$Cobs[x,yind]             # store catches for this time period
  E_hist <- C_hist/Ind[yind]              # store standardized effort (partial F)
  E_hist <- E_hist/mean(E_hist)           # normalize the effort to mean 1
  E_hist[is.na(E_hist)]<-E_hist[(1:length(E_hist))[match(TRUE,!is.na(E_hist))]] # Impute missing effort
 
  surv<-exp(-cumsum(dset$M[x,]))          # survival at age
  muM<-sum(dset$M[x,]*surv)/sum(surv)     # mean natural mortality rate accounting for survival
  ny <- length(C_hist)                    # record the total number of years of data
  params<-c(muM,sum(C_hist),0.05)         # initial values for r, K and q (Effort covariate has been standardized to mean 1) 
  rprior<-c(muM,0.5)                      # a vague prior on r based on the assumption that FMSY ~ 0.5 x M and FMSY = r/2
  
  opt<-optim(log(params),SP_R,opty=1,
             C_hist=C_hist,E_hist=E_hist,
             rprior=rprior,ny=ny, # run an optimization to fit the data
             startD=startD,                # starting depletion according to function argument above
             method = "L-BFGS-B", 
             lower=log(params/c(3,20,20)), # the first parameter, r, is bounded more tightly than K and q
             upper=log(params*c(3,20,20)), # the tighter constraint on r is to prevent chaotic model behavior above r = 1.3
             hessian = TRUE,               # return a hessian matrix for simple testing of convergence
             control=list(maxit=100))      # optimization can't run for more than 100 iterations

  posdef<-sum(eigen(solve(opt$hessian))$values>0)==3 # is the hessian positive-definite, ie has convergence been achieved?
  
  if(checkfit){                            # Plot fit to catches for model testing
    fit<-SP_R(opt$par,opty=4,C_hist=C_hist,E_hist=E_hist,rprior=rprior,ny=ny,startD=startD); 
    plot(fit[,1],xlab='Model year',ylab="Catches",col='blue',ylim=c(0,max(fit)))
    lines(fit[,2],col='red') 
    legend('topright',legend=c("Observed","Predicted"),text.col=c("blue","red"),bty='n')
    legend('topleft',legend=paste0("Converged: ",posdef),bty='n')
  }

  if(posdef){   # if model converged return new TAC
    SP_R(opt$par,opty=2,C_hist=C_hist,E_hist=E_hist,rprior=rprior,ny=ny,startD=startD)  # opty = 2 returns FMSY x cur biomass
  }else{         # otherwise return previous TAC subject to a 5 percent downward adjustment
    dset$MPrec[x]*0.95
  }  
}
  
class(SP_i4)<-'MP'

The corresponding SP estimation function is:


SP_R<-function(logparams, opty, C_hist, E_hist, rprior,ny,startD){   # simple surplus production model r, K and q leading
  
  r<-exp(logparams[1])                                # Intrinsic rate of increase
  K<-exp(logparams[2])                                # Carrying capacity
  qq<-exp(logparams[3])                               # Catchability (F=qE)
  B<-K*startD                                         # Starting biomass level
  
  Cpred<-rep(NA,ny)                                   # A predicted catch vector
  Bpred<-rep(NA,ny)                                   # A predicted biomass vector
  
  for(i in 1:ny){                                     # loop over years
    Cpred[i]<-B*(1-exp(-qq*E_hist[i]))                # Predicted catches
    B<-B+r*B*(1-B/K)-Cpred[i]                         # update biomass according to SP dynamics
    Bpred[i]<-B                                       # Record biomass
  }
 
  if(opty==1){                                         # return objective function
    
    test<-dnorm(log(Cpred+1E-10),log(C_hist+1E-10),0.2,log=T)      # observed versus predicted log catches
    #test<-dnorm(Cpred,C_hist,0.2*Cpred,log=T)         # observed versus predicted catches
    test2<-dlnorm(r,log(rprior[1]),rprior[2],log=T)    # a weak  lognormal prior on r
    test[is.na(test)|test==(-Inf)]<--1000              # some robustification
    if(is.na(test2)|test2==-Inf|test2==Inf)test2<--1000 # some more robustification (penalized, consistent with test above)
    return(-sum(test,test2))                           # objective function
    
  }else if(opty==2){                                   # return MLE FMSY * current biomass estimate
    
    r/2*Bpred[ny]
    
  }else if(opty==3){
    
    Bpred[ny]/K                                        # return depletion
  
  }else{
    
    cbind(C_hist,Cpred)                                # return observations vs predictions
    
  }

}

As before we can now test this on an example dataset:

 SP_i4(x=1,dset=dset_example_East)

[1] 2768913579

 SP_i4(x=1,dset=dset_example_East,startD=0.5)

[1] 2768913579

Checking the fit to observed catches:

 SP_i4(x=1,dset=dset_example_East,checkfit=T)

[1] 2768913579

Let's make sure this works for all simulations using the regular sapply() function:

 nsim<-nrow(dset_example_East$Cobs)
 sapply(1:nsim,SP_i4,dset_example_East)

[1] 2768913579 2655857562 1465362857 2537366453 3322359744 3164937031 3315154478 3005550840 4074516202 552755149 2600813887 53523950 2887697595 3475351170 678211799 3121447670

Will the MP work in parallel?:

 sfInit(parallel=T,cpus=detectCores())
 sfSapply(1:nsim,SP_i4,dset_example_East)
#> Error in checkForRemoteErrors(val): 16 nodes produced errors; first error: object 'SP_R' not found

You can see that there is an error.

The only issue with our new CMP is that it requires the estimation function SP_R and this is not visible to the cluster so can’t currently be used in parallel processing.

This however is easily solved with the sfExport() function. The correct order of operations is:

 sfInit(parallel=T,cpus=detectCores())
 sfExport("SP_R")
 sfSapply(1:nsim,SP_i4,dset_example_East)

[1] 2768913579 2655857562 1465362857 2537366453 3322359744 3164937031 3315154478 3005550840 4074516202 552755149 2600813887 53523950 2887697595 3475351170 678211799 3121447670

You may have noticed that for this quick-running CMP the parallel version is not actually faster than the regular version (the elapsed time is longer because of the overhead of communicating with the cluster):

 system.time(sapply(1:nsim,SP_i4,dset_example_East))

user system elapsed 0.12 0.00 0.12

 system.time(sfSapply(1:nsim,SP_i4,dset_example_East))

user system elapsed 0.02 0.00 0.35

The more computationally intensive the CMP, the greater the advantage of parallel over regular linear processing. For this reason it may be worth going the extra step of making your CMP compatible with cluster computing by exporting any internal functions to the cluster.

14.5 Example of designing an index-based MP for Atlantic Bluefin tuna

This document is intended for users who wish to design and test a CMP that sets annual TACs for the East and West management areas using only indices of abundance. This is a critical step in the MSE process for ABFT - do not hesitate to contact Tom Carruthers () for help with CMP development.

In this example we are going to specify a very simple target index CMP, a CMP that will seek a target level by modifying TACs.

There are a number of possible ways to do this but we will use a very basic incremental TAC stepping method.

We need to know a few things including 1) the target level of the relevant index, and 2) the maximum rate of TAC change which we are willing to tolerate.

We can see the names of the various indices available for CMP development by looking at the table object Indices that was loaded when you ran loadABT().

Indices
    No             Name Stock Quarter Area      Min       Max         Mean       Median First year Last year
1    1      FR_AER_SUV2     1       3    7    0.014     0.136 5.584719e-02 4.670780e-02       2009      2021
14   2      MED_LAR_SUV     1       2    7    1.825    99.899 2.636493e+01 1.873870e+01       2001      2021
35   3      GOM_LAR_SUV     2       2    1    0.158     5.512 1.028363e+00 6.340575e-01       1977      2021
80   4 GBYP_AER_SUV_BAR     1       2    7 1386.637 13418.841 5.991643e+03 4.723028e+03       2010      2021
92   5     MOR_POR_TRAP     1       2    4   63.259   158.075 1.021735e+02 9.866094e+01       2012      2021
102  6    JPN_LL_NEAtl2     1       4    5    2.254     8.882 6.633604e+00 6.824412e+00       2010      2021
114  7     US_RR_66_114     2       3    2    0.340     2.690 9.996154e-01 8.900000e-01       1995      2020
140  8    US_RR_115_144     2       3    2    0.110     2.360 1.000000e+00 8.650000e-01       1995      2020
166  9        US_RR_177     2       3    2    0.279     2.518 9.704141e-01 7.861579e-01       1993      2021
195 10     JPN_LL_West2     2       4    2    0.183     2.160 1.022192e+00 9.273449e-01       2010      2021
207 11          CAN GSL     2       3    3    0.010     0.437 1.493131e-01 9.647727e-02       1988      2021
241 12         CAN SWNS     2       3    2    0.507     2.658 1.512114e+00 1.526137e+00       1996      2021
267 13     US_RR_66_144     2       3    2    0.376     2.128 1.041782e+00 9.029592e-01       1995      2021
294 14    MEXUS_GOM_PLL     2       2    1    0.210     2.240 1.036968e+00 9.148459e-01       1994      2021

You will notice that each index is collected in an area associated with a current stock assessment, where Stock = 1 is the East area and Stock = 2 is the West area.

Let’s say we wished to use the Mediterranean Larval Survey (Index No. 2) to set TACs in the East and the Gulf of Mexico Larval Survey (Index No. 3) to set TACs in the West.

Let’s assume that the current stock level (over the last three years) is around 1.5 times the MSY level in the East (Figure 4 for base reference operating model 1AI of this document) and around 0.5 times the MSY level in the West.

We can extract the current index levels (years 2014 to 2019 are model years 50 to 55, since the model starts in 1965) for each index from the observation model slot MPind that includes all the index data:
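The year-index arithmetic (model year 1 = 1965) is easy to check directly:

```r
# Model year 1 corresponds to 1965, so model years 50:55 are calendar
# years 2014 to 2019.
start_year <- 1965
year_index <- 50:55
start_year + year_index - 1   # 2014 2015 2016 2017 2018 2019
```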

MPind<-Bad_Obs@MPind
IndE<-MPind[MPind$No==2 & MPind$Year%in%(50:55),] # get the Med Larval Survey
IndW<-MPind[MPind$No==3 & MPind$Year%in%(50:55),] # get the GOM Larval Survey

It is now straightforward to determine a possible target level for each index:


TargE=mean(IndE$Index)/1.5
TargW=mean(IndW$Index)/0.5
paste("TargE =",TargE)

[1] “TargE = NA”

paste("TargW =",TargW)

[1] “TargW = 3.19105748633333”

In this simple example, the two index-based MPs will alter the current TAC according to the ratio of the current index level (the observed index averaged over the last yrs4mean years; the default is 3 years) to the target level. Note that TargE came out as NA above because the Med larval survey series contains missing (NA) years; mean(IndE$Index, na.rm=TRUE) would avoid this. We will set the maximum extent of upward TAC change (Deltaup) to 5% and downward TAC change (Deltadown) to 20% for both MPs.

MP_E = function(x,dset,Targ=26.1,Deltaup=0.05,Deltadown=0.20,IndexNo=2,yrs4mean=3){

  lastyr = dim(dset$Iobs)[3]                # Most recent year
  datayrs = lastyr-(yrs4mean-1):0           # Position of data for calculating current index
  curI = mean(dset$Iobs[x,IndexNo,datayrs],na.rm=T) # mean of last yrs4mean years
  Irat = curI/Targ                          # Index ratio
  oldTAC = dset$MPrec[x]                    # The last TAC recommendation

  if(Irat<(1-Deltadown)){                       # If index ratio is less than minimum adjustment
    TAC = oldTAC*(1-Deltadown)

  }else if(Irat>(1+Deltaup)){                 # If index ratio is greater than maximum adjustment
    TAC = oldTAC*(1+Deltaup)

  }else{
    TAC = oldTAC*Irat
  }

  TAC                                       # Last thing returned is the TAC recommendation

}

class(MP_E) = "MP"                         # Finally make sure it is of class MP

The West CMP differs only in the first line of the function, where we set an appropriate target level and index number:

MP_W = function(x,dset,Targ=2.55,Deltaup=0.05,Deltadown=0.20,IndexNo=3,yrs4mean=3){

  lastyr = dim(dset$Iobs)[3]                # Most recent year
  datayrs = lastyr-(yrs4mean-1):0           # Position of data for calculating current index
  curI = mean(dset$Iobs[x,IndexNo,datayrs],na.rm=T) # mean of last yrs4mean years
  Irat = curI/Targ                          # Index ratio
  oldTAC = dset$MPrec[x]                    # The last TAC recommendation

  if(Irat<(1-Deltadown)){                       # If index ratio is less than minimum adjustment
    TAC = oldTAC*(1-Deltadown)

  }else if(Irat>(1+Deltaup)){                 # If index ratio is greater than maximum adjustment
    TAC = oldTAC*(1+Deltaup)

  }else{
    TAC = oldTAC*Irat
  }

  TAC                                       # Last thing returned is the TAC recommendation

}

class(MP_W) = "MP"                         # Finally make sure it is of class MP

We can now test out our new CMPs against simple mean catches and a fixed harvest rate of 5% (U5):


myMPs<-list(c('U5','U5'),
            c('MeanC','MeanC'),
            c('MP_E','MP_W'))

myMSE2<-new('MSE',OM=OM_1,MPs=myMPs)
plot(myMSE2)
getperf(myMSE2)

A standard MSE performance trade-off plot PPlot() is also available that defaults to the 7 performance metrics (class PM) of the trial specifications document.

You can find out more about these PMs using the avail() function and also use PPlot() to produce a performance plot for a smaller number of PMs (or your own custom PMs) if you would prefer:


PPlot(MSE_example)

avail('PM')

[1] “AvC10” “AvC20” “AvC30” “AvgBr” “AvUrel” “Br20” “Br30” “C1” “C10” “C20” “C30” “D10” “D20” “D30” “DNC” “LD” “LDNC” “OFT” “PGK” “PNOF” “PNOS” “PNRK” “POF” “POS” “PrpOF” “RD30” “VarC”

?AvgBr
?PPlot

14.6 One last example of a CMP specification, from start to finish

In just one block of code, let's design and test a CMP that uses the spawning index in the East to inform stock depletion.

Assuming surplus production dynamics, this means that FMSY x current biomass is simply depletion x MSY x 2. Let's assume average catches are approximately MSY to complete this MP:
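The identity behind this MP can be verified numerically for Schaefer dynamics (arbitrary illustrative parameter values):

```r
# Under a Schaefer model, MSY = r*K/4 and FMSY = r/2, so
# FMSY * current biomass = (r/2) * (depletion * K) = 2 * MSY * depletion.
r <- 0.4; K <- 1e5; D <- 0.35   # arbitrary illustrative values
MSY    <- r * K / 4
FMSY_B <- (r / 2) * (D * K)
all.equal(FMSY_B, 2 * MSY * D)  # TRUE
```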


MCD_i2<-function(x,dset,startD=0.5){
  
  nyears<-dim(dset$Iobs)[3]                  # Most recent year
  mean(dset$Cobs[x,],na.rm=T)*               # Average historical catches
    mean(dset$Iobs[x,2,(nyears-2):nyears]/max(dset$Iobs[x,2,],na.rm=T),na.rm=T)*   # Mean index over last three years, scaled to its historical maximum
    2*startD                                 # Adjusted for starting depletion and MSY production at depletion = 0.5
  
}

class(MCD_i2)<-"MP"

sfSapply(1:nsim,MCD_i2,dset_example_East)  

[1] 84797586 113225027 113296520 97338187 85519228 129564560 134343943 119975202 86982985 153507852 151471949 157597165 83535301 53331624 123153388 89225172

14.7 Tips for MPs

  • It is generally a good idea to take measures to ensure your MPs are as robust as possible. For example, when taking average catches it is a good idea to ignore NA values by setting the argument na.rm=TRUE in our mean catch MP:

 MeanC<-function(x,dset) mean(dset$Cobs[x,],na.rm=T)
  • If estimating parameters inside the model, it is best to initialize on the side of low exploitation. For example, initialized at a high intrinsic rate of increase (r), low catchability (q) and a carrying capacity (K) on the order of the sum of historical catches, the surplus production CMP we defined above avoids a crashed stock in the first instance (which could lead to very flat gradients in the objective function with respect to the parameters, preventing numerical estimation).

  • If your CMP relies on estimation or ‘a good fit’ before it produces a TAC recommendation, design it so that there is a diagnostic and an alternative TAC recommendation if this requirement is not met (e.g. our convergence diagnostic in the surplus production custom CMP above).

  • Remember that the data can include NA values. For example, most of the indices do not go back all the way to the start of the historical simulation. It follows that your CMPs may produce errors if they are not written to extract the parts of the simulated data that are not NA.
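For instance, a base-R pattern for trimming an index to its non-NA years (mirroring the yind line in SP_i4 above):

```r
# Toy index with missing early years (made-up values): keep only the positions
# where data exist before doing any fitting.
Ind  <- c(NA, NA, 0.8, 1.1, NA, 1.3)
yind <- which(!is.na(Ind))   # 3 4 6
Ind[yind]                    # 0.8 1.1 1.3
```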

14.8 Standardized MSE reporting

New to version 3+ is a standardized MSE report, which you can generate for any MSE object:


MSE_report(MSE_example,dir="C:/temp", Author='Tom C', introtext="Just a demonstration report", filenam="Example_MSE_report")

Author, introtext and filenam arguments are optional. If dir is not specified then the report is created in the current working directory getwd().

15 Rapid MSE for designing CMPs

The ABTMSE package includes deterministic versions of reference operating models (e.g. OM_1d is the deterministic version of OM_1). These include only two simulations that are identical copies of the maximum posterior density estimates of the fitted operating model.

Using the ‘avail’ function you can list the various OMs and get rapid results for the deterministic version of OM #1.

sfStop() # don't need parallel
avail('OM')

[1] "OM_1" "OM_10" "OM_10d" "OM_10e" "OM_10s" "OM_10t" "OM_11" "OM_11d" "OM_11e" "OM_11s" "OM_11t" "OM_12" "OM_12d" "OM_12e" "OM_12s" "OM_12t" "OM_13" "OM_13d" "OM_13e" "OM_13s" "OM_13t" "OM_14" "OM_14d" "OM_14e" "OM_14s" "OM_14t" "OM_15" "OM_15d" "OM_15e" "OM_15s" "OM_15t" "OM_16" "OM_16d" "OM_16e" "OM_16s" "OM_16t" "OM_17" "OM_17d" "OM_17e" "OM_17s" "OM_17t" "OM_18" "OM_18d" "OM_18e" "OM_18s" "OM_18t" "OM_19" "OM_19d" "OM_19e"
[50] "OM_19s" "OM_19t" "OM_1d" "OM_1e" "OM_1s" "OM_1t" "OM_2" "OM_20" "OM_20d" "OM_20e" "OM_20s" "OM_20t" "OM_21" "OM_21d" "OM_21e" "OM_21s" "OM_21t" "OM_22" "OM_22d" "OM_22e" "OM_22s" "OM_22t" "OM_23" "OM_23d" "OM_23e" "OM_23s" "OM_23t" "OM_24" "OM_24d" "OM_24e" "OM_24s" "OM_24t" "OM_25" "OM_25d" "OM_25e" "OM_25s" "OM_25t" "OM_26" "OM_26d" "OM_26e" "OM_26s" "OM_26t" "OM_27" "OM_27d" "OM_27e" "OM_27s" "OM_27t" "OM_28" "OM_28d"
[99] "OM_28e" "OM_28s" "OM_28t" "OM_29" "OM_29d" "OM_29e" "OM_29s" "OM_29t" "OM_2d" "OM_2e" "OM_2s" "OM_2t" "OM_3" "OM_30" "OM_30d" "OM_30e" "OM_30s" "OM_30t" "OM_31" "OM_31d" "OM_31e" "OM_31s" "OM_31t" "OM_32" "OM_32d" "OM_32e" "OM_32s" "OM_32t" "OM_33" "OM_33d" "OM_33e" "OM_33s" "OM_33t" "OM_34" "OM_34d" "OM_34e" "OM_34s" "OM_34t" "OM_35" "OM_35d" "OM_35e" "OM_35s" "OM_35t" "OM_36" "OM_36d" "OM_36e" "OM_36s" "OM_36t" "OM_37"
[148] "OM_37d" "OM_37e" "OM_37s" "OM_37t" "OM_38" "OM_38d" "OM_38e" "OM_38s" "OM_38t" "OM_39" "OM_39d" "OM_39e" "OM_39s" "OM_39t" "OM_3d" "OM_3e" "OM_3s" "OM_3t" "OM_4" "OM_40" "OM_40d" "OM_40e" "OM_40s" "OM_40t" "OM_41" "OM_41d" "OM_41e" "OM_41s" "OM_41t" "OM_42" "OM_42d" "OM_42e" "OM_42s" "OM_42t" "OM_43" "OM_43d" "OM_43e" "OM_43s" "OM_43t" "OM_44" "OM_44d" "OM_44e" "OM_44s" "OM_44t" "OM_45" "OM_45d" "OM_45e" "OM_45s" "OM_45t"
[197] "OM_46" "OM_46d" "OM_46e" "OM_46s" "OM_46t" "OM_47" "OM_47d" "OM_47e" "OM_47s" "OM_47t" "OM_48" "OM_48d" "OM_48e" "OM_48s" "OM_48t" "OM_4d" "OM_4e" "OM_4s" "OM_4t" "OM_5" "OM_5d" "OM_5e" "OM_5s" "OM_5t" "OM_6" "OM_6d" "OM_6e" "OM_6s" "OM_6t" "OM_7" "OM_7d" "OM_7e" "OM_7s" "OM_7t" "OM_8" "OM_8d" "OM_8e" "OM_8s" "OM_8t" "OM_9" "OM_9d" "OM_9e" "OM_9s" "OM_9t" "OM_example" "ROM_1" "ROM_10" "ROM_10d" "ROM_11"
[246] "ROM_11d" "ROM_12" "ROM_12d" "ROM_13" "ROM_13d" "ROM_14" "ROM_14d" "ROM_15" "ROM_15d" "ROM_16" "ROM_16d" "ROM_17" "ROM_17d" "ROM_18" "ROM_18d" "ROM_19" "ROM_19d" "ROM_1d" "ROM_2" "ROM_20" "ROM_20d" "ROM_21" "ROM_21d" "ROM_22" "ROM_22d" "ROM_23" "ROM_23d" "ROM_24" "ROM_24d" "ROM_25" "ROM_25d" "ROM_26" "ROM_26d" "ROM_27" "ROM_27d" "ROM_28" "ROM_28d" "ROM_29" "ROM_29d" "ROM_2d" "ROM_3" "ROM_30" "ROM_30d" "ROM_31" "ROM_31d" "ROM_32" "ROM_32d" "ROM_33" "ROM_33d"
[295] "ROM_34" "ROM_34d" "ROM_35" "ROM_35d" "ROM_36" "ROM_36d" "ROM_37" "ROM_37d" "ROM_38" "ROM_38d" "ROM_39" "ROM_39d" "ROM_3d" "ROM_4" "ROM_40" "ROM_40d" "ROM_41" "ROM_41d" "ROM_42" "ROM_42d" "ROM_43" "ROM_43d" "ROM_44" "ROM_44d" "ROM_4d" "ROM_5" "ROM_5d" "ROM_6" "ROM_6d" "ROM_7" "ROM_7d" "ROM_8" "ROM_8d" "ROM_9" "ROM_9d"

myMPs<-list(c('MP_E','MP_W')) # each list element is an East/West pair of MP names
testCMP<-new('MSE',OM=OM_1d,MPs=myMPs)

Constructing arrays
Calculating historical fishing mortality rate at length (computationally intensive)
Initializing simulations
Running historical simulations
Running projections
1/2 Running MSE for: ZeroC (East) ZeroC (West) ....................
2/2 Running MSE for: MP_E (East) MP_W (West) ....................

plot(testCMP)
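Beyond the summary plot, projected catches can be inspected directly, as in the C1 example earlier in this document. This is a sketch that assumes the CWa slot layout [MP, sim, area, year] shown in that example, and that the MP_E/MP_W pair is the second MP after the automatic ZeroC run:

```r
# Hypothetical inspection of projected catches from the testCMP object.
# Assumed slot layout: testCMP@CWa[MP, sim, area, year], area 1 = East, 2 = West.
Cproj_E <- testCMP@CWa[2, , 1, ]  # MP_E/MP_W pair, East: a [sim, year] matrix
Cproj_W <- testCMP@CWa[2, , 2, ]  # MP_E/MP_W pair, West: a [sim, year] matrix

# Plot one catch trajectory per simulation (rows of the [sim, year] matrix)
matplot(t(Cproj_E), type = 'l', xlab = 'Year', ylab = 'Eastern catch')
```

Under Good_Obs the two simulations' lines should diverge in the projection years, which is the behaviour the next example removes.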

By default, the MSE above ran using the 'Good_Obs' observation error model, which applies different realizations of observation error to the two simulations in the projection years. If you wish to test MPs with very little observation error, so that the simulations produce almost identical future conditions, you need to specify the 'Perfect_Obs' observation error model:


avail('Obs')

[1] "Bad_Obs" "Good_Obs" "Hyperstable" "Obs_example" "Perfect_Obs" "Unreported_20"

testCMP2<-new('MSE',OM=OM_1d,MPs=myMPs,Obs=Perfect_Obs) # specify the Perfect_Obs model

Constructing arrays
Calculating historical fishing mortality rate at length (computationally intensive)
Initializing simulations
Running historical simulations
Running projections
1/2 Running MSE for: ZeroC (East) ZeroC (West) ....................
2/2 Running MSE for: MP_E (East) MP_W (West) ....................

plot(testCMP2)

You will notice that there is now no difference between the two simulations as observation error is essentially zero and does not lead to diverging projections.
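This can also be confirmed numerically rather than by eye. The sketch below assumes the CWa slot layout [MP, sim, area, year] used in the earlier C1 example; with near-zero observation error the two simulations should yield essentially the same projected catch trajectory:

```r
# Hypothetical check on the Perfect_Obs run: compare the two simulations'
# projected Eastern catches for the MP_E/MP_W pair (assumed to be MP index 2,
# after the automatic ZeroC run).
E <- testCMP2@CWa[2, , 1, ]              # [sim, year] matrix, East

# Maximum relative difference between simulation 1 and simulation 2;
# expect a value very close to zero under Perfect_Obs
max(abs(E[1, ] - E[2, ]) / abs(E[1, ]))
```

The same comparison run on testCMP (the Good_Obs version) should give a noticeably larger value, since its simulations diverge.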

16 Tuning

Coming soon.