Authorea

Preprints

Explore 12,628 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.
Read more about preprints.

The human experience with intravenous levodopa
Shan H. Siddiqi, Natalia Abrahan, and 5 more

April 24, 2015
ABSTRACT OBJECTIVE: To compile a comprehensive summary of published human experience with levodopa given intravenously, with a focus on information required by regulatory agencies. BACKGROUND: While safe intravenous use of levodopa has been documented for over 50 years, regulatory supervision for pharmaceuticals given by a route other than that approved by the U.S. Food and Drug Administration (FDA) has become increasingly cautious. If delivering a drug by an alternate route raises the risk of adverse events, an investigational new drug (IND) application is required, including a comprehensive review of toxicity data. METHODS: Over 200 articles referring to intravenous levodopa (IVLD) were examined for details of administration, pharmacokinetics, benefit and side effects. RESULTS: We identified 144 original reports describing IVLD use in humans, beginning with psychiatric research in 1959-1960 before the development of peripheral decarboxylase inhibitors. At least 2781 subjects have received IVLD, and reported outcomes include parkinsonian signs, sleep variables, hormones, hemodynamics, CSF amino acid composition, regional cerebral blood flow, cognition, perception and complex behavior. Mean pharmacokinetic variables were summarized for 49 healthy subjects and 190 with Parkinson disease. Side effects were those expected from clinical experience with oral levodopa and dopamine agonists. No articles reported deaths or induction of psychosis. CONCLUSION: At least 2781 patients have received i.v. levodopa with a safety profile comparable to that seen with oral administration.
Orthostatic stability with intravenous levodopa
Shan H. Siddiqi, Mary L. Creech (RN, MSW, LCSW), and 2 more

April 20, 2015
Intravenous levodopa has been used in a multitude of research studies due to its more predictable pharmacokinetics compared to the oral form, which is used frequently as a treatment for Parkinson’s disease (PD). Levodopa is the precursor of dopamine, and intravenous dopamine would strongly affect vascular tone, but peripheral decarboxylase inhibitors are intended to block such effects. Pulse and blood pressure, with orthostatic changes, were recorded before and after intravenous levodopa or placebo—after oral carbidopa—in 13 adults with a chronic tic disorder and 16 tic-free adult control subjects. Levodopa caused no statistically or clinically significant changes in blood pressure or pulse. These results add to previous data supporting the safety of i.v. levodopa when given with adequate peripheral inhibition of DOPA decarboxylase.
An Atlas of Human Kinase Regulation
David Ochoa, Pedro Beltrao, and 1 more

April 15, 2015
The coordinated regulation of protein kinases is a rapid mechanism that integrates diverse cues and swiftly determines appropriate cellular responses. However, our understanding of cellular decision-making has been limited by the small number of simultaneously monitored phospho-regulatory events. Here, we have estimated changes in the activity of 215 human kinases across 399 conditions, derived from a large compilation of phosphopeptide quantifications. This atlas identifies commonly regulated kinases as those that are central in the signaling network and defines the logic relationships between kinase pairs. Co-regulation across conditions predicts kinase-complex and kinase-substrate associations. Additionally, the kinase regulation profile acts as a molecular fingerprint to identify related and opposing signaling states. Using this atlas, we identified essential mediators of stem cell differentiation, modulators of Salmonella infection and new targets of AKT1. This work provides a global view of human phosphorylation-based signaling and the necessary context to better understand kinase-driven decision-making.
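As a rough illustration of how kinase activity changes can be inferred from phosphoproteomics data, a common approach scores each kinase by the mean z-scored fold change of its annotated substrate phosphosites. The sketch below is a generic, hypothetical example of that idea and is not the exact statistic used in the preprint; the toy site and kinase names are invented.

```python
import pandas as pd

def kinase_activities(phospho_log2fc: pd.DataFrame,
                      kinase_substrates: dict) -> pd.DataFrame:
    """Estimate kinase activity changes per condition as the mean z-scored
    log2 fold change of each kinase's annotated substrate phosphosites.

    phospho_log2fc: phosphosites (rows) x conditions (columns).
    kinase_substrates: kinase name -> list of phosphosite row labels.
    """
    z = (phospho_log2fc - phospho_log2fc.mean()) / phospho_log2fc.std(ddof=0)
    activities = {}
    for kinase, sites in kinase_substrates.items():
        known = [s for s in sites if s in z.index]
        if known:
            activities[kinase] = z.loc[known].mean()  # mean over substrates
    return pd.DataFrame(activities).T                 # kinases x conditions

# Hypothetical toy input: four phosphosites quantified in two conditions.
fc = pd.DataFrame({"cond1": [2.0, 1.5, -0.2, 0.1],
                   "cond2": [-1.0, -0.8, 0.3, 0.0]},
                  index=["site1", "site2", "site3", "site4"])
subs = {"kinaseA": ["site1", "site2"], "kinaseB": ["site4"]}
print(kinase_activities(fc, subs))
```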
Stochastic inversion workflow using the gradual deformation in order to predict and m...
Lorenzo Perozzi, Gloaguen, and 2 more

April 07, 2015
ABSTRACT Due to budget constraints, CCS in deep saline aquifers is often carried out with only one injector well and one control well, which severely limits inference of the dynamics of the CO_2 plume. In such cases, monitoring of the CO_2 plume relies only on geological assumptions or indirect data. In this paper, we present a new two-step stochastic P- and S-wave velocity, density and porosity inversion approach that allows reliable monitoring of the CO_2 plume using time-lapse VSP. In the first step, we compute several sets of stochastic models of the elastic properties using conventional sequential Gaussian co-simulations. The realizations within a set of static models are then iteratively combined using a modified gradual deformation optimization technique, with the difference between computed and observed raw traces as the objective function. In the second step, these static models serve as input for CO_2 injection history matching using the same modified gradual deformation scheme. At each gradual deformation step the CO_2 injection is simulated and the corresponding full-wave traces are computed and compared to the observed data. The method has been tested on a synthetic heterogeneous saline aquifer model mimicking the environment of the CO_2 CCS pilot in the Bécancour area, Quebec. The optimized models of P- and S-wave velocity, density and porosity show improved structural similarity with the reference models compared to conventional simulations.
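The gradual deformation step described above can be sketched in a few lines: two independent Gaussian realizations are combined as m(theta) = m1*cos(theta) + m2*sin(theta), which preserves the Gaussian statistics, and theta is chosen to minimize a data misfit. This is a minimal, generic illustration under that assumption (the least-squares misfit below stands in for the trace comparison), not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gradual_deformation(m1, m2, misfit):
    """Combine two Gaussian realizations with the gradual deformation
    parameterization m(theta) = m1*cos(theta) + m2*sin(theta), which
    preserves standard Gaussian statistics, and pick the theta that
    minimizes a user-supplied misfit function."""
    objective = lambda theta: misfit(np.cos(theta) * m1 + np.sin(theta) * m2)
    res = minimize_scalar(objective, bounds=(-np.pi, np.pi), method="bounded")
    theta = res.x
    return np.cos(theta) * m1 + np.sin(theta) * m2, theta

# Toy usage: the "observed" data is a reference realization and the misfit
# is a simple sum of squared differences.
rng = np.random.default_rng(0)
m_ref = rng.standard_normal(200)
m1, m2 = rng.standard_normal(200), rng.standard_normal(200)
m_opt, theta = gradual_deformation(m1, m2, lambda m: np.sum((m - m_ref) ** 2))
print(f"optimal theta = {theta:.3f}")
```

In practice the pair (m_opt, new realization) is fed back into the same combination step, so the chain of one-parameter optimizations gradually deforms the model toward the data while staying within the prior geostatistical model.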
The Resource Identification Initiative: A cultural shift in publishing
Anita Bandrowski, Matthew H. Brush, and 14 more

March 25, 2015
ABSTRACT A central tenet in support of research reproducibility is the ability to uniquely identify research resources, i.e., reagents, tools, and materials that are used to perform experiments. However, current reporting practices for research resources are insufficient to identify the exact resources that are reported or answer basic questions such as “How did other studies use resource X?”. To address this issue, the Resource Identification Initiative was launched as a pilot project to improve the reporting standards for research resources in the methods sections of papers and thereby improve identifiability and reproducibility. The pilot engaged over 25 biomedical journal editors from most major publishers, as well as scientists and funding officials. Authors were asked to include Research Resource Identifiers (RRIDs) in their manuscripts prior to publication for three resource types: antibodies, model organisms, and tools (i.e. software and databases). RRIDs are assigned by an authoritative database, for example a model organism database, for each type of resource. To make it easier for authors to obtain RRIDs, resources were aggregated from the appropriate databases and their RRIDs made available in a central web portal (scicrunch.org/resources). RRIDs meet three key criteria: they are machine readable, free to generate and access, and are consistent across publishers and journals. The pilot was launched in February of 2014 and over 300 papers have appeared that report RRIDs. The number of journals participating has expanded from the original 25 to more than 40. Here, we present an overview of the pilot project and its outcomes to date. We show that authors are able to identify resources and are supportive of the goals of the project. Identifiability of the resources post-pilot showed a dramatic improvement for all three resource types, suggesting that the project has had a significant impact on reproducibility relating to research resources.
Rapid Environmental Quenching of Satellite Dwarf Galaxies in the Local Group
Andrew Wetzel, Erik Tollerud, and 2 more

March 06, 2015
In the Local Group, nearly all of the dwarf galaxies ($M_\star \lesssim 10^9\,M_\odot$) that are satellites within $300\,\mathrm{kpc}$ (the virial radius) of the Milky Way (MW) and Andromeda (M31) have quiescent star formation and little-to-no cold gas. This contrasts strongly with comparatively isolated dwarf galaxies, which are almost all actively star-forming and gas-rich. This near dichotomy implies a _rapid_ transformation after falling into the halos of the MW or M31. We combine the observed quiescent fractions for satellites of the MW and M31 with the infall times of satellites from the ELVIS suite of cosmological simulations to determine the typical timescales over which environmental processes within the MW/M31 halos remove gas and quench star formation in low-mass satellite galaxies. The quenching timescales for satellites with $M_\star < 10^8\,M_\odot$ are short, $\lesssim 2\,\mathrm{Gyr}$, and decrease at lower $M_\star$. These quenching timescales can be $1-2\,\mathrm{Gyr}$ longer if environmental preprocessing in lower-mass groups prior to MW/M31 infall is important. We compare with timescales for more massive satellites from previous works, exploring satellite quenching across the observable range of $M_\star = 10^{3-11}\,M_\odot$. The environmental quenching timescale increases rapidly with satellite $M_\star$, peaking at $\approx 9.5\,\mathrm{Gyr}$ for $M_\star \sim 10^9\,M_\odot$, and rapidly decreases at higher $M_\star$ to less than $5\,\mathrm{Gyr}$ at $M_\star > 5\times10^9\,M_\odot$. Thus, satellites with $M_\star \sim 10^9\,M_\odot$, similar to the Magellanic Clouds, exhibit the longest environmental quenching timescales.
Ebola virus epidemiology, transmission, and evolution during seven months in Sierra L...
Daniel Park, Gytis Dudas, and 22 more

March 02, 2015
SUMMARY The 2013-2015 Ebola virus disease (EVD) epidemic is caused by the Makona variant of Ebola virus (EBOV). Early in the epidemic, genome sequencing provided insights into virus evolution and transmission, and offered important information for outbreak response. Here we analyze sequences from 232 patients sampled over 7 months in Sierra Leone, along with 86 previously released genomes from earlier in the epidemic. We confirm sustained human-to-human transmission within Sierra Leone and find no evidence for import or export of EBOV across national borders after its initial introduction. Using high-depth replicate sequencing, we observe both host-to-host transmission and recurrent emergence of intrahost genetic variants. We trace the increasing impact of purifying selection in suppressing the accumulation of nonsynonymous mutations over time. Finally, we note changes in the mucin-like domain of EBOV glycoprotein that merit further investigation. These findings clarify the movement of EBOV within the region and describe viral evolution during prolonged human-to-human transmission.
Top-quark electroweak couplings at the FCC-ee
Patrick Janot, tenchini, and 3 more

February 26, 2015
INTRODUCTION The design study of the Future Circular Colliders (FCC) in a 100-km ring in the Geneva area started at CERN at the beginning of 2014, as an option for post-LHC particle accelerators. The study has an emphasis on proton-proton and electron-positron high-energy frontier machines. In the current plans, the first step of the FCC physics programme would exploit a high-luminosity ${\rm e^+e^-}$ collider called FCC-ee, with centre-of-mass energies ranging from below the Z pole to the ${\rm t\bar t}$ threshold and beyond. A first look at the physics case of the FCC-ee can be found in Ref. . In this first look, the focus regarding top-quark physics was on precision measurements of the top-quark mass, width, and Yukawa coupling through a scan of the ${\rm t\bar t}$ production threshold, with $\sqrt{s}$ between 340 and 350 GeV. The expected precision on the top-quark mass was in turn used, together with the outstanding precisions on the Z peak observables and on the W mass, in a global electroweak fit to set constraints on weakly-coupled new physics up to a scale of 100 TeV. Although not studied in the first look, measurements of the top-quark electroweak couplings are of interest, as new physics might also show up via significant deviations of these couplings with respect to their standard-model predictions. Theories in which the top quark and the Higgs boson are composite lead to such deviations. The inclusion of a direct measurement of the ttZ coupling in the global electroweak fit is therefore likely to further constrain these theories. It has been claimed that both a centre-of-mass energy well beyond the top-quark pair production threshold and a large longitudinal polarization of the incoming electron and positron beams are crucially needed to access the ttγ and the ttZ couplings independently for both chirality states of the top quark. In Ref. , it is shown that measurements of the total event rate and the forward-backward asymmetry of the top quark, with 500 ${\rm fb}^{-1}$ at $\sqrt{s}=500$ GeV and with beam polarizations of ${\cal P} = \pm 0.8$, ${\cal P}^\prime = \mp 0.3$, allow for this distinction. The aforementioned claim is revisited in the present study. The sensitivity to the top-quark electroweak couplings is estimated here with an optimal-observable analysis of the lepton angular and energy distributions of over a million events from ${\rm t\bar t}$ production at the FCC-ee, in the $\ell \nu {\rm q \bar q b \bar b}$ final states (with $\ell = {\rm e}$ or μ), without incoming beam polarization and with a centre-of-mass energy not significantly above the ${\rm t\bar t}$ production threshold. Such a sensitivity can be understood from the fact that the top-quark polarization arising from its coupling to the Z is maximally transferred to the final-state particles via the weak top-quark decay ${\rm t \to W b}$, with a nearly 100% branching fraction: the lack of initial polarization is compensated by the presence of substantial final-state polarization, and by a larger integrated luminosity. A similar situation was encountered at LEP, where the measurement of the total rate of ${\rm Z} \to \tau^+\tau^-$ events and of the tau polarization was sufficient to determine the tau couplings to the Z, regardless of initial-state polarization. This letter is organized as follows. First, the reader is briefly reminded of the theoretical framework.
Next, the statistical analysis of the optimal observables is described, and realistic estimates for the top-quark electroweak coupling sensitivities are obtained as a function of the centre-of-mass energy at the FCC-ee. Finally, the results are discussed, and prospects for further improvements are given.
A new method for identifying the Pacific-South American pattern and its influence on...
Damien Irving, Ian Simmonds, and 1 more

February 24, 2015
The Pacific-South American (PSA) pattern is an important mode of climate variability in the mid-to-high southern latitudes. It is widely recognized as the primary mechanism by which the El Niño-Southern Oscillation (ENSO) influences the south-east Pacific and south-west Atlantic, and in recent years has also been suggested as a mechanism by which longer-term tropical sea surface temperature trends can influence the Antarctic climate. This study presents a novel methodology for objectively identifying the PSA pattern. The identification algorithm rotates the global coordinate system so that the equator (a great circle) traces the approximate path of the pattern, and then applies Fourier analysis as opposed to a traditional Empirical Orthogonal Function approach. The climatology arising from the application of this method to ERA-Interim reanalysis data reveals that the PSA pattern has a strong influence on temperature and precipitation variability over West Antarctica and the Antarctic Peninsula, and on sea ice variability in the adjacent Amundsen, Bellingshausen and Weddell Seas. Identified seasonal trends towards the negative phase of the PSA pattern are consistent with warming observed over the Antarctic Peninsula during autumn, but are inconsistent with observed winter warming over West Antarctica. Only a weak relationship is identified between the PSA pattern and ENSO, which suggests that the pattern might be better conceptualized as a preferred regional atmospheric response to various external (and internal) forcings.
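Once the field has been interpolated onto equally spaced points along the rotated equator (the great circle tracing the approximate PSA path), the Fourier step amounts to reading off the amplitude and phase of a chosen zonal wavenumber. A minimal sketch under that assumption, not the authors' code:

```python
import numpy as np

def zonal_wave(v_along_circle, wavenumber=3):
    """Amplitude and phase of a single zonal wavenumber, for a field sampled
    at equally spaced points around the rotated great circle."""
    n = len(v_along_circle)
    coeffs = np.fft.rfft(v_along_circle)
    amplitude = 2.0 * np.abs(coeffs[wavenumber]) / n
    phase = np.angle(coeffs[wavenumber])      # radians
    return amplitude, phase

# Synthetic check: a pure wave-3 signal of amplitude 5, shifted by 20 degrees,
# returns amplitude ~5 and phase ~-60 degrees (about -1.047 rad).
lons = np.linspace(0.0, 2.0 * np.pi, 144, endpoint=False)
v = 5.0 * np.cos(3.0 * (lons - np.radians(20.0)))
print(zonal_wave(v, wavenumber=3))
```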
The spin rate of pre-collapse stellar cores: wave driven angular momentum transport i...
Jim Fuller, Matteo Cantiello, and 4 more

February 22, 2015
The core rotation rates of massive stars have a substantial impact on the nature of core collapse supernovae and their compact remnants. We demonstrate that internal gravity waves (IGW), excited via envelope convection during a red supergiant phase or during vigorous late time burning phases, can have a significant impact on the rotation rate of the pre-SN core. In typical (10 M⊙ ≲ M ≲ 20 M⊙) supernova progenitors, IGW may substantially spin down the core, leading to iron core rotation periods $P_{\rm min,Fe} \gtrsim 50 \, {\rm s}$. Angular momentum (AM) conservation during the supernova would entail minimum NS rotation periods of $P_{\rm min,NS} \gtrsim 3 \, {\rm ms}$. In most cases, the combined effects of magnetic torques and IGW AM transport likely lead to substantially longer rotation periods. However, the stochastic influx of AM delivered by IGW during shell burning phases inevitably spin up a slowly rotating stellar core, leading to a maximum possible core rotation period. We estimate maximum iron core rotation periods of $P_{\rm max,Fe} \lesssim 10^4 \, {\rm s}$ in typical core collapse supernova progenitors, and a corresponding spin period of $P_{\rm max, NS} \lesssim 400 \, {\rm ms}$ for newborn neutron stars. This is comparable to the typical birth spin periods of most radio pulsars. Stochastic spin-up via IGW during shell O/Si burning may thus determine the initial rotation rate of most neutron stars. For a given progenitor, this theory predicts a Maxwellian distribution in pre-collapse core rotation frequency that is uncorrelated with the spin of the overlying envelope.
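The jump from a ~50 s iron-core period to a ~3 ms neutron-star period quoted above follows from angular momentum conservation at fixed mass. As a rough worked example (the radii below are illustrative assumptions, roughly 1500 km for the pre-collapse iron core and 12 km for the neutron star, not values taken from the abstract):

$$ P_{\rm NS} \simeq P_{\rm Fe}\left(\frac{R_{\rm NS}}{R_{\rm Fe}}\right)^{2} \approx 50\,{\rm s}\times\left(\frac{12\,{\rm km}}{1500\,{\rm km}}\right)^{2} \approx 3\,{\rm ms}. $$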
Software Use in Astronomy: An Informal Survey
Ivelina Momcheva, Erik Tollerud, and 1 more

February 08, 2015
INTRODUCTION Much of modern astronomy research depends on software. Digital images and numerical simulations are central to the work of most astronomers today, and anyone who is actively involved in astronomy research has a variety of software techniques in their toolbox. Furthermore, the sheer volume of data has increased dramatically in recent years. The efficient and effective use of large data sets increasingly requires more than rudimentary software skills. Finally, as astronomy moves towards the open code model, propelled by pressure from funding agencies and journals as well as the community itself, readability and reusability of code will become increasingly important (Figure [fig:xkcd]). Yet we know few details about the software practices of astronomers. In this work we aim to gain a greater understanding of the prevalence of software tools, the demographics of their users, and the level of software training in astronomy. The astronomical community has, in the past, provided funding and support for software tools intended for the wider community. Examples of this include the Goddard IDL library (funded by the NASA ADP), IRAF (supported and developed by AURA at NOAO), STSDAS (supported and developed by STScI), and the Starlink suite (funded by PPARC). As the field develops, new tools are required and we need to focus our efforts on ones that will have the widest user base and the lowest barrier to utilization. For example, as our work here shows, the much larger astronomy user base of Python relative to the language R suggests that tools in the former language are likely to get many more users and contributors than the latter. More recently, there has been a growing discussion of the importance of data analysis and software development training in astronomy (e.g., the special sessions at the 225th AAS meeting, “Astroinformatics and Astrostatistics in Astronomical Research: Steps Towards Better Curricula” and “Licensing Astrophysics Codes”, which were standing room only). Although astronomy and astrophysics went digital long ago, the formal training of astronomy and physics students rarely involves software development or data-intensive analysis techniques. Such skills are increasingly critical in the era of ubiquitous “Big Data” (e.g., the 2015 NOAO Big Data conference). Better information on the needs of researchers as well as the current availability of training opportunities (or lack thereof) can be used to inform, motivate and focus future efforts towards improving this aspect of the astronomy curriculum. In 2014 the Software Sustainability Institute carried out an inquiry into the software use of researchers in the UK (see also the associated presentation). This survey provides useful context for software usage by researchers, as well as a useful definition of “research software”: Software that is used to generate, process or analyze results that you intend to appear in a publication (either in a journal, conference paper, monograph, book or thesis). Research software can be anything from a few lines of code written by yourself, to a professionally developed software package. Software that does not generate, process or analyze results - such as word processing software, or the use of a web search - does not count as ‘research software’ for the purposes of this survey. However, this survey was limited to researchers at UK institutions. More importantly, it was not focused on astronomers, who may have quite different software practices from scientists in other fields.
Motivated by these issues and related discussions during the .Astronomy 6 conference, we created a survey to explore software use in astronomy. In this paper, we discuss the methodology of the survey in §[sec:datamethods], the results from the multiple-choice sections in §[sec:res] and the free-form comments in §[sec:comments]. In §[sec:ssicompare] we compare our results to the aforementioned SSI survey and in §[sec:conc] we conclude. We have made the anonymized results of the survey and the code to generate the summary figures available at https://github.com/eteq/software_survey_analysis. This repository may be updated in the future if a significant number of new respondents fill out the survey[1]. [1] http://tinyurl.com/pvyqw59
A minimum standard for publishing computational results in the weather and climate sc...
Damien Irving

January 14, 2015
Weather and climate science has undergone a computational revolution in recent decades, to the point where all modern research relies heavily on software and code. Despite this profound change in the research methods employed by weather and climate scientists, the reporting of computational results has changed very little in relevant academic journals. This lag has led to something of a reproducibility crisis, whereby it is impossible to replicate and verify most of today’s published computational results. While it is tempting to simply decry the slow response of journals and funding agencies in the face of this crisis, there are very few examples of reproducible weather and climate research upon which to base new communication standards. In an attempt to address this deficiency, this essay describes a procedure for reporting computational results that was employed in a recent _Journal of Climate_ paper. The procedure was developed to be consistent with recommended computational best practices and seeks to minimize the time burden on authors, which has been identified as the most important barrier to publishing code. It should provide a starting point for weather and climate scientists looking to publish reproducible research, and it is proposed that journals could adopt the procedure as a minimum standard.
IEDA EarthChem: Supporting the sample-based geochemistry community with data resource...
Leslie Hsu

December 29, 2014
ABSTRACT Integrated sample-based geochemical measurements enable new scientific discoveries in the Earth sciences. However, integration of geochemical data is difficult because of the variety of sample types and measured properties, idiosyncratic analytical procedures, and the time commitment required for adequate documentation. To support geochemists in integrating and reusing geochemical data, EarthChem, part of IEDA (Integrated Earth Data Applications), develops and maintains a suite of data systems to serve the scientific community. The EarthChem Library focuses on dataset publication, accessibility, and linking with other sources. Topical synthesis databases (e.g., PetDB, SedDB, Geochron) integrate data from several sources and preserve metadata associated with analyzed samples. The EarthChem Portal optimizes data discovery and provides analysis tools. Contributing authors obtain citable DOI identifiers, usage reports of their data, and increased discoverability. The community benefits from open access to data leading to accelerated scientific discoveries. Growing citations of EarthChem systems demonstrate its success.
Parameter estimation on gravitational waves from neutron-star binaries with spinning...
Ben Farr, Christopher P L Berry, and 16 more

December 11, 2014
INTRODUCTION As we enter the advanced-detector era of ground-based gravitational-wave (GW) astronomy, it is critical that we understand the abilities and limitations of the analyses we are prepared to conduct. Of the many predicted sources of GWs, binary neutron-star (BNS) coalescences are paramount; their progenitors have been directly observed, and the advanced detectors will be sensitive to their GW emission up to ∼400 Mpc away. When analyzing a GW signal from a circularized compact binary merger, strong degeneracies exist between parameters describing the binary (e.g., distance and inclination). To properly estimate any particular parameter(s) of interest, the marginal distribution is estimated by integrating the joint posterior probability density function (PDF) over all other parameters. In this work, we sample the posterior PDF using software implemented in the LALINFERENCE library. Specifically, we use results from LALINFERENCE_NEST, a nested sampling algorithm, and LALINFERENCE_MCMC, a Markov-chain Monte Carlo algorithm \citep[chapter 12]{Gregory2005}. Previous studies of BNS signals have largely assessed parameter constraints assuming negligible neutron-star (NS) spin, restricting models to nine parameters. This simplification has largely been due to computational constraints, but the slow spin of NSs in short-period BNS systems observed to date \citep[e.g.,][]{Mandel_2010} has also been used as justification. However, proper characterization of compact binary sources _must_ account for the possibility of non-negligible spin; otherwise parameter estimates will be biased. This bias can potentially lead to incorrect conclusions about source properties and even misidentification of source classes. Numerous studies have looked at the BNS parameter estimation abilities of ground-based GW detectors such as the Advanced Laser Interferometer Gravitational-Wave Observatory \citep[aLIGO;][]{Aasi_2015} and Advanced Virgo \citep[AdV;][]{Acernese_2014} detectors. assessed localization abilities on a simulated non-spinning BNS population. looked at several potential advanced-detector networks and quantified the parameter-estimation abilities of each network for a signal from a fiducial BNS with non-spinning NSs. demonstrated the ability to characterize signals from non-spinning BNS sources with waveform models for spinning sources using Bayesian stochastic samplers in the LALINFERENCE library. used approximate methods to quantify the degeneracy between spin and mass estimates, assuming the compact objects’ spins are aligned with the orbital angular momentum of the binary \citep[but see][]{Haster_2015}. simulated a collection of loud signals from non-spinning BNS sources in several mass bins and quantified parameter estimation capabilities in the advanced-detector era using non-spinning models. introduced precession from spin–orbit coupling and found that the additional richness encoded in the waveform could reduce the mass–spin degeneracy, helping BNSs to be distinguished from NS–black hole (BH) binaries. conducted a similar analysis of a large catalog of sources and found that it is difficult to infer the presence of a mass gap between NSs and BHs, although this may still be possible using a population of a few tens of detections. Finally, and the follow-on represent an (almost) complete end-to-end simulation of BNS detection and characterization during the first 1–2 years of the advanced-detector era.
These studies simulated GWs from an astrophysically motivated BNS population, then detected and characterized sources using the search and follow-up tools that are used for LIGO–Virgo data analysis. The final stage of the analysis missing from these studies is the computationally expensive characterization of sources while accounting for the compact objects’ spins and their degeneracies with other parameters. The present work is the final step of BNS characterization for the simulations using waveforms that account for the effects of NS spin. We begin with a brief introduction to the source catalog used for this study in section [sec:sources]. Then, in section [sec:spin] we describe the results of parameter estimation from a full analysis that includes spin. In section [sec:mass] we look at mass estimates in more detail, and at spin-magnitude estimates in section [sec:spin-magnitudes]. In section [sec:extrinsic] we consider the estimation of extrinsic parameters: sky position (section [sec:sky]) and distance (section [sec:distance]), which we do not expect to be significantly affected by the inclusion of spin in the analysis templates. We summarize our findings in section [sec:conclusions]. A comparison of computational costs for spinning and non-spinning parameter estimation is given in appendix [ap:CPU].
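For reference, the marginalization described above takes the standard Bayesian form: writing $\vec{\theta}$ for the parameters of interest and $\vec{\lambda}$ for the remaining (nuisance) parameters,

$$ p(\vec{\theta}\,|\,d) = \int p(\vec{\theta},\vec{\lambda}\,|\,d)\,\mathrm{d}\vec{\lambda} \;\propto\; \int \mathcal{L}(d\,|\,\vec{\theta},\vec{\lambda})\,p(\vec{\theta},\vec{\lambda})\,\mathrm{d}\vec{\lambda}, $$

and the nested-sampling and MCMC engines mentioned above approximate this integral with posterior samples, from which marginal distributions for any parameter of interest are obtained directly.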
A novel approach to diagnosing Southern Hemisphere planetary wave activity and its in...
Damien Irving, Ian Simmonds, and 1 more

November 16, 2014
Southern Hemisphere mid-to-upper tropospheric planetary wave activity is characterized by the superposition of two zonally-oriented, quasi-stationary waveforms: zonal wavenumber one (ZW1) and zonal wavenumber three (ZW3). Previous studies have tended to consider these waveforms in isolation and with the exception of those studies relating to sea ice, little is known about their impact on regional climate variability. We take a novel approach to quantifying the combined influence of ZW1 and ZW3, using the strength of the hemispheric meridional flow as a proxy for zonal wave activity. Our methodology adapts the wave envelope construct routinely used in the identification of synoptic-scale Rossby wave packets and improves on existing approaches by allowing for variations in both wave phase and amplitude. While ZW1 and ZW3 are both prominent features of the climatological circulation, the defining feature of highly meridional hemispheric states is an enhancement of the ZW3 component. Composites of the mean surface conditions during these highly meridional, ZW3-like anomalous states (i.e. months of strong planetary wave activity) reveal large sea ice anomalies over the Amundsen and Bellingshausen Seas during autumn and along much of the East Antarctic coastline throughout the year. Large precipitation anomalies in regions of significant topography (e.g. New Zealand, Patagonia, coastal Antarctica) and anomalously warm temperatures over much of the Antarctic continent were also associated with strong planetary wave activity. The latter has potentially important implications for the interpretation of recent warming over West Antarctica and the Antarctic Peninsula.
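The wave envelope construct referred to above is commonly obtained from the magnitude of the analytic signal (via the Hilbert transform) of the meridional wind around a latitude circle, which captures variations in both wave phase and amplitude. A minimal generic sketch of that calculation, not necessarily the authors' exact implementation:

```python
import numpy as np
from scipy.signal import hilbert

def wave_envelope(v_along_latitude):
    """Envelope of zonal wave activity from the meridional wind sampled at
    equally spaced longitudes around a latitude circle.  The magnitude of
    the analytic signal follows the local wave amplitude rather than the
    oscillating wave itself."""
    return np.abs(hilbert(v_along_latitude))

# Synthetic check: a wave-3 signal whose amplitude is modulated around the
# circle; the envelope recovers the modulation 3 + 2*cos(lon).
lons = np.linspace(0.0, 2.0 * np.pi, 240, endpoint=False)
v = (3.0 + 2.0 * np.cos(lons)) * np.cos(3.0 * lons)
print(wave_envelope(v).round(1))
```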
Satellite Dwarf Galaxies in a Hierarchical Universe: Infall Histories, Group Preproce...
Andrew Wetzel, Alis Deason, and 2 more

October 31, 2014
In the Local Group, almost all satellite dwarf galaxies that are within the virial radius of the Milky Way (MW) and M31 exhibit strong environmental influence. The orbital histories of these satellites provide the key to understanding the role of the MW/M31 halo, lower-mass groups, and cosmic reionization on the evolution of dwarf galaxies. We examine the virial-infall histories of satellites with $M_\star = 10^{3-9}\,M_\odot$ using the ELVIS suite of cosmological zoom-in dissipationless simulations of 48 MW/M31-like halos. Satellites at z = 0 fell into the MW/M31 halos typically $5-8\,\mathrm{Gyr}$ ago at z = 0.5 − 1. However, they first fell into any host halo typically $7-10\,\mathrm{Gyr}$ ago at z = 0.7 − 1.5. This difference arises because many satellites experienced “group preprocessing” in another host halo, typically of $M_{\rm vir} \sim 10^{10-12}\,M_\odot$, before falling into the MW/M31 halos. Lower-mass satellites and/or those closer to the MW/M31 fell in earlier and are more likely to have experienced group preprocessing; half of all satellites with $M_\star < 10^6\,M_\odot$ were preprocessed in a group. Infalling groups also drive most satellite-satellite mergers within the MW/M31 halos. Finally, _none_ of the surviving satellites at z = 0 were within the virial radius of their MW/M31 halo during reionization (z > 6), and only <4% were satellites of any other host halo during reionization. Thus, effects of cosmic reionization versus host-halo environment on the formation histories of surviving dwarf galaxies in the Local Group occurred at distinct epochs and are separable in time.
Distinguishing disorder from order in irreversible decay processes
Jonathan Nichols, Shane Flynn, and 2 more

August 25, 2014
Fluctuating rate coefficients are necessary when modeling disordered kinetic processes with mass-action rate equations. However, measuring the fluctuations of rate coefficients is a challenge, particularly for nonlinear rate equations. Here we present a measure of the total disorder in irreversible decay, $i\,\mathrm{A} \to \mathrm{products}$, $i = 1, 2, 3, \ldots, n$, governed by (non)linear rate equations: the inequality between the time-integrated square of the rate coefficient (multiplied by the time interval of interest) and the square of the time-integrated rate coefficient. We apply the inequality to empirical models for statically and dynamically disordered kinetics with $i \geq 2$. These models serve to demonstrate that the inequality quantifies the cumulative variations in a rate coefficient, and that the bound becomes an equality only when the rate coefficient is constant in time.
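In symbols, with $k(t)$ the (possibly fluctuating) rate coefficient and $\Delta t = t_f - t_i$ the time interval of interest, the measure described above is the bound

$$ \Delta t \int_{t_i}^{t_f} k(t)^{2}\,\mathrm{d}t \;\geq\; \left[\int_{t_i}^{t_f} k(t)\,\mathrm{d}t\right]^{2}, $$

which follows from the Cauchy-Schwarz inequality; equality holds only when $k(t)$ is constant over the interval, so the gap between the two sides quantifies the cumulative variation (disorder) in the rate coefficient.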
Real-space grids and the Octopus code as tools for the development of new simulation...
Xavier Andrade, David A. Strubbe, and 15 more

August 18, 2014
Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties constitute a great advantage at the time of implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.
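As a flavor of the real-space grid approach described above (a generic one-dimensional textbook illustration, not tied to the Octopus implementation): discretizing the Schrödinger equation on a uniform grid turns the kinetic operator into a finite-difference matrix, and the lowest eigenpairs of the resulting Hamiltonian approximate the bound states.

```python
import numpy as np

# 1D harmonic oscillator on a uniform real-space grid (atomic units).
# The Laplacian becomes a tridiagonal finite-difference matrix, so the
# Hamiltonian is a plain matrix whose eigenvalues approximate the spectrum.
n, L = 400, 20.0                       # grid points, box size
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

lap = (np.diag(np.full(n - 1, 1.0), -1)
       - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / h**2
H = -0.5 * lap + np.diag(0.5 * x**2)   # T + V for V(x) = x^2 / 2

energies = np.linalg.eigvalsh(H)[:4]
print(energies)                        # approximately [0.5, 1.5, 2.5, 3.5]
```

Swapping the potential or adding terms to H is a one-line change in this picture, which is part of what makes real-space grids convenient test beds for new physical models.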
The "Paper" of the Future
Alyssa Goodman, Josh Peek, and 10 more

August 02, 2014
_A 5-minute video demonstration of this paper is available at this YouTube link._ PREAMBLE A variety of research on human cognition demonstrates that humans learn and communicate best when more than one processing system (e.g. visual, auditory, touch) is used. Related research also shows that, no matter how technical the material, most humans retain and process information best when they can put a narrative "story" to it. So, when considering the future of scholarly communication, we should be careful not to blithely do away with the linear narrative format that articles and books have followed for centuries: instead, we should enrich it. Much more than text is used to communicate in science. Figures, which include images, diagrams, graphs, charts, and more, have enriched scholarly articles since the time of Galileo, and ever-growing volumes of data underpin most scientific papers. When scientists communicate face-to-face, as in talks or small discussions, these figures are often the focus of the conversation. In the best discussions, scientists have the ability to manipulate the figures, and to access underlying data, in real time, so as to test out various what-if scenarios, and to explain findings more clearly. THIS SHORT ARTICLE EXPLAINS—AND SHOWS WITH DEMONSTRATIONS—HOW SCHOLARLY "PAPERS" CAN MORPH INTO LONG-LASTING RICH RECORDS OF SCIENTIFIC DISCOURSE, enriched with deep data and code linkages, interactive figures, audio, video, and commenting.
Compressed Sensing for the Fast Computation of Matrices: Application to Molecular Vib...
Jacob Sanders, Xavier Andrade, and 2 more

July 11, 2014
This article presents a new method to compute matrices from numerical simulations based on the ideas of sparse sampling and compressed sensing. The method is useful for problems where the determination of the entries of a matrix constitutes the computational bottleneck. We apply this new method to an important problem in computational chemistry: the determination of molecular vibrations from electronic structure calculations, where our results show that the overall scaling of the procedure can be improved in some cases. Moreover, our method provides a general framework for bootstrapping cheap low-accuracy calculations in order to reduce the required number of expensive high-accuracy calculations, resulting in a significant 3× speed-up in actual calculations.
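A minimal illustration of the sparse-recovery idea that compressed sensing builds on (generic, and not the matrix-reconstruction procedure of the article): a vector with few nonzero entries can be recovered from far fewer random measurements than its length by l1-regularized least squares.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5                 # signal length, measurements, nonzeros

x_true = np.zeros(n)                 # a k-sparse signal
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                  # m << n measurements

# l1-regularized least squares recovers the sparse vector to good accuracy.
lasso = Lasso(alpha=1e-3, max_iter=50000, fit_intercept=False)
lasso.fit(A, y)
x_hat = lasso.coef_
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```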
Large-Scale Microscopic Traffic Behaviour and Safety Analysis of Québec Roundabout De...
Paul St-Aubin, Nicolas Saunier, and 2 more

July 08, 2014
INTRODUCTION Roundabouts are a relatively new design for intersection traffic management in North America. With great promise from abroad in terms of safety as well as capacity—roundabouts are a staple of European road design—they have only recently proliferated in parts of North America, including the province of Québec. However, questions still remain regarding the feasibility of introducing the roundabout to regions where driving culture and road design philosophy differ and where drivers are not habituated to their use. This aspect of road user behaviour integration is crucial for their implementation, because roundabouts manage traffic conflicts passively. In roundabouts, road user interactions and driving conflicts are handled entirely by way of driving etiquette between road users: lane merging, right-of-way, yielding behaviour, and eye contact in the case of vulnerable road users are all at play for successful passage negotiation at a roundabout. This is in contrast with typical North American intersections managed by computer-controlled traffic-light controllers (or, on occasion, police officers), and with traffic circles of all kinds, which are also signalized. And while roundabouts share much in common with 4- and 2-way stops, they are frequently used for high-capacity, even high-speed, intersections where 4- and 2-way stops would normally not be justified. Resistance to adoption remains significant in some areas, notably on the part of vulnerable road users such as pedestrians and cyclists, but also among some drivers. While a number of European studies cite reductions in accident probability and accident severity, particularly for the Netherlands, Denmark, and Sweden, research on roundabouts in North America is still limited, and even fewer attempts at microscopic behaviour analysis exist anywhere in the world. The latter is important because it provides insight into the inner mechanics of driving behaviour, which might be key to tailoring roundabout design for regional adoption and implementation efforts. Fortunately, more systematic and data-rich analysis techniques are being made available today. This paper proposes the application of a novel, video-based, semi-automated trajectory analysis approach for large-scale microscopic behavioural analysis of 20 of the 100 available roundabouts in Québec, investigating 37 different roundabout weaving zones. The objectives of this paper are to explore the impact of Québec roundabout design characteristics, their geometry and built environment on driver behaviour and safety through microscopic, video-based trajectory analysis. Driver behaviour is characterized by merging speed and time-to-collision, a maturing indicator of surrogate safety and behaviour analysis in the field of transportation safety. In addition, this work represents one of the largest applications of surrogate safety analysis to date.
Comparison of Various Time-to-Collision Prediction and Aggregation Methods for Surrog...
Paul St-Aubin, Luis Miranda-Moreno, and 2 more

July 08, 2014
INTRODUCTION Traditional methods of road safety analysis rely on direct road accident observations, data sources which are rare and expensive to collect and which also carry the social cost of placing citizens at risk of unknown danger. Surrogate safety analysis is a growing discipline in the field of road safety analysis that promises a more pro-active approach to road safety diagnosis. This methodology uses non-crash traffic events, and measures thereof, as predictors of collision probability and severity, as they are significantly more frequent, cheaper to collect, and have no social impact. Time-to-collision (TTC) is an example of an indicator primarily of collision probability: the smaller the TTC, the less likely it is that drivers have time to perceive and react before a collision, and thus the higher the probability of a collision outcome. Relative positions and velocities between road users, or between a user and obstacles, can be characterised by a collision course and the corresponding TTC. Meanwhile, driving speed (absolute speed) is an example of an indicator that primarily measures collision severity. The higher the travelling speed, the more stored kinetic energy is dissipated during a collision impact. Similarly, large speed differentials between road users or with stationary obstacles may also contribute to collision severity, though the TTC depends on relative distance as well. Driving speed is used extensively in stopping-sight distance models, with some even suggesting that drivers modulate their emergency braking in response to travel speed. Others contend that there is little empirical evidence of a relationship between speed and collision probability. Many surrogate safety methods have been used in the literature, especially recently with the renewal of automated data collection methods, but consistency in the definitions of traffic events and indicators, in their interpretation, and in the transferability of results is still lacking. While a wide diversity of models demonstrates that research in the field is thriving, there remains a need for comparison of the methods, and even for a methodology of comparison, in order to make surrogate safety practical for practitioners. For example, time-to-collision measures collision course events, but the definition of a collision course lacks rigour in the literature. Also lacking is systematic validation of the different techniques. Some early attempts have been made with the Swedish Traffic Conflict Technique using trained observers, though more recent attempts across different methodologies, preferably with automated and objectively-defined measures, are still needed. Ideally, this would be done with respect to crash data and crash-based safety diagnosis. The second-best method is to compare the characteristics of all the methods and their results on the same data set, but public benchmark data is also very limited despite recent efforts. The objectives of this paper are to review the definition and interpretation of one of the most ubiquitous and least context-sensitive surrogate safety indicators, namely time-to-collision, for surrogate safety analysis using i) consistent, recent, and, most importantly, objective definitions of surrogate safety indicators, ii) a very large data set across numerous sites, and iii) the latest developments in automated analysis.
This work examines the use of various motion prediction methods (constant velocity, normal adaptation, and observed motion patterns) for the TTC safety indicator, chosen for its transferability, as well as space and time aggregation methods for continuous surrogate safety indicators. This represents an application of surrogate safety analysis to one of the largest data sets to date.
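Under the simplest of the motion prediction methods mentioned above (constant velocity), the TTC of two road users follows from solving a quadratic for the time at which their predicted separation first drops below a collision threshold. A minimal sketch of that calculation; the disc idealization and the collision_radius parameter are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def ttc_constant_velocity(p1, v1, p2, v2, collision_radius=2.0):
    """Time-to-collision (s) under a constant-velocity motion prediction.

    p1, p2: current positions (m); v1, v2: velocities (m/s), as 2D vectors.
    Road users are idealized as discs; a collision is predicted when their
    centre-to-centre distance drops below collision_radius.  Returns None
    if the users are not on a collision course.
    """
    dp = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    c = dp @ dp - collision_radius ** 2
    if c <= 0.0:                       # already closer than the threshold
        return 0.0
    a = dv @ dv
    b = 2.0 * dp @ dv
    if a == 0.0:                       # no relative motion, no collision
        return None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                     # predicted paths never get close enough
        return None
    t = (-b - np.sqrt(disc)) / (2.0 * a)   # first threshold crossing
    return t if t >= 0.0 else None

# Two vehicles approaching the same point at right angles: TTC ~3.86 s.
print(ttc_constant_velocity([0.0, -40.0], [0.0, 10.0],
                            [-40.0, 0.0], [10.0, 0.0]))
```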
The Fork Factor: an academic impact factor based on reuse.
Ferdinando Pucci, Alberto Pepe, and 1 more

July 06, 2014
HOW IS ACADEMIC RESEARCH EVALUATED? There are many different ways to determine the impact of scientific research. One of the oldest and best established measures is to look at the Impact Factor (IF) of the academic journal where the research has been published. The IF is simply the average number of citations to recent articles published in such an academic journal. The IF is important because the reputation of a journal is also used as a proxy to evaluate the relevance of past research performed by a scientist when s/he is applying for a new position or for funding. So, if you are a scientist who publishes in high-impact journals (the big names) you are more likely to get tenure or a research grant. Several criticisms have been made of the use and misuse of the IF. One of these concerns the policies that academic journal editors adopt to boost the IF of their journal (and get more ads), to the detriment of readers, writers and science at large. Unfortunately, these policies promote the publication of sensational claims by researchers who are in turn rewarded by funding agencies for publishing in high-IF journals. This effect is broadly recognized by the scientific community and represents a conflict of interest that, in the long run, increases public distrust in published data and slows down scientific discovery. Scientific discoveries should instead foster new findings through the sharing of high-quality scientific data, which feeds back into increasing the pace of scientific breakthroughs. It is apparent that the IF is a critically compromised player in this situation. To resolve the conflict of interest, it is thus fundamental that funding agencies (a major driving force in science) start complementing the IF with a better proxy for the relevance of publishing venues and, in turn, scientists’ work. RESEARCH IMPACT IN THE ERA OF FORKING. A number of alternative metrics for evaluating academic impact are emerging. These include metrics that give scholars credit for sharing of raw science (like datasets and code), semantic publishing, and social media contributions, based not solely on citations but also on usage, social bookmarking, and conversations. We, at Authorea, strongly believe that these alternative metrics should and will be a fundamental ingredient of how scholars are evaluated for funding in the future. In fact, Authorea already welcomes data, code, and raw science materials alongside its articles, and is built on an infrastructure (Git) that naturally poses as a framework for distributing, versioning, and tracking those materials. Git is a version control platform currently employed by developers for collaborating on source code, and its features perfectly fit the needs of most scientists as well. A versioning system, such as Authorea or GitHub, empowers FORKING of peer-reviewed research data, allowing a colleague of yours to further develop it in a new direction. Forking inherits the history of the work and preserves the value chain of science (i.e., who did what). In other words, forking in science means _standing on the shoulders of giants_ (or soon-to-be giants) and is equivalent to citing someone else’s work, but in a functional manner. Whether it is a “negative” result (we like to call it a non-confirmatory result) or not, publishing your peer-reviewed research on Authorea will promote forking of your data. (To learn how we plan to implement peer review in the system, please stay tuned for future posts on this blog.) MORE FORKING, MORE IMPACT, HIGHER QUALITY SCIENCE.
Obviously, the more of your research data are published, the higher the chances that they will be forked and used as a basis for groundbreaking work, and, in turn, the greater the interest in your work and your academic impact. Whether your projects are data-driven, peer-reviewed articles on Authorea discussing a new finding, raw datasets detailing novel findings on Zenodo or Figshare, or source code repositories hosted on GitHub presenting a new statistical package, every bit of your work that can be reused will be forked and will give you credit. Do you want to do science a favor? Publish non-confirmatory results as well, and help your scientific community quickly spot bad science by publishing a dead-end fork (Figure 1).
The effect of carbon subsidies on marine planktonic niche partitioning and recruitmen...
Charles Pepe-Ranney, Ed Hall, and 1 more

June 16, 2014
INTRODUCTION Biofilms are diverse and complex microbial consortia, and the biofilm lifestyle is the rule rather than the exception for microbes in many environments. Large- and small-scale biofilm architectural features play an important role in their ecology and influence their contribution to biogeochemical cycles. Fluid mechanics impact biofilm structure and assembly, but it is less clear how other abiotic factors such as resource availability affect biofilm assembly. Aquatic biofilms initiate with seed propagules from the planktonic community. Thus, resource amendments that influence planktonic communities may also influence the recruitment of microbial populations during biofilm community assembly. In a crude sense, biofilm and planktonic microbial communities divide into two key groups: oxygenic phototrophs including eukaryotes and cyanobacteria (hereafter “photoautotrophs”), and heterotrophic bacteria and archaea. This dichotomy, admittedly an abstraction (e.g. non-phototrophs can also be autotrophs), can be a powerful paradigm for understanding community shifts across ecosystems of varying trophic state. Heterotrophs meet some to all of their organic carbon (C) requirements from photoautotroph-produced C while simultaneously competing with photoautotrophs for limiting nutrients such as phosphorus (P). The presence of external C inputs, such as terrigenous C leaching from the watershed or C exudates derived from macrophytes, can alleviate heterotroph reliance on photoautotroph-derived C and shift the heterotroph-photoautotroph relationship from commensal and competitive to strictly competitive. Therefore, increased C supply should increase the resource space available to heterotrophs and increase competition for mineral nutrients, decreasing the nutrients available to photoautotrophs (assuming that heterotrophs are superior competitors for limiting nutrients, as has been observed). These dynamics should result in an increase in heterotroph biomass relative to photoautotroph biomass along a gradient of increasing labile C inputs. We refer to this differential allocation of limiting resources among components of the microbial community as niche partitioning. While these gross-level dynamics have been discussed conceptually and to some extent demonstrated empirically, the effects of biomass dynamics on photoautotroph and heterotroph membership and structure have not been directly evaluated in plankton or biofilms. In addition, how changes in planktonic communities propagate to biofilms during community assembly is not well understood. We designed this study to test whether C subsidies shift the biomass balance between autotrophs and heterotrophs within the biofilm or its seed pool (i.e. the plankton), and to measure how changes in biomass pool size alter the composition of the plankton and biofilm communities. Specifically, we amended marine mesocosms with varying levels of labile C input and evaluated differences in photoautotroph and heterotrophic bacterial biomass in plankton and biofilm samples along the C gradient. In each treatment we characterized plankton and biofilm community composition by PCR amplifying and DNA sequencing 16S rRNA genes and plastid 23S rRNA genes.