| content (stringlengths 91–663k) | score (float64, 0.5–1) | source (stringclasses, 1 value) |
|---|---|---|
Life cycle assessment of permanent magnet electric traction motors
Journal article, 2019
Ongoing development of electrified road vehicles entails a risk of conflict between resource issues and the reduction of greenhouse gas emissions. In this study, the environmental impact of the core design and magnet material for three electric vehicle traction motors was explored with life cycle assessment (LCA): two permanent magnet synchronous machines with neodymium-dysprosium-iron-boron or samarium-cobalt magnets, and a permanent magnet-assisted synchronous reluctance machine (PM-assisted SynRM) with strontium-ferrite magnets. These combinations of motor types and magnets, although highly relevant for vehicles, are new subjects for LCA. The study included substantial data compilation, machine design and drive-cycle calculations. All motors handle equal take-off, top speed, and driving conditions. The production (except for magnets) and use phases are modeled for two countries – Sweden and the USA – to exemplify the effects of different electricity supply. Impacts on climate change and human toxicity were found to be most important. Complete manufacturing falls within 1.7–2.0 g CO2-eq./km for all options. The PM-assisted SynRM has the highest efficiency and the lowest CO2 emissions. Copper production is significant for toxicity impacts and effects on human health, with problematic emissions from mining. Resource depletion results diverge depending on the evaluation method, but a sensitivity analysis proved the other results to be robust. Key motor design targets are identified: high energy efficiency, slender housings, compact end-windings, segmented laminates to reduce production scrap, and easy disassembly.
Keywords: Life cycle assessment (LCA), Magnet, Electric motor, Neodymium, Samarium, Ferrite
| 0.9378 | FineWeb |
Scientists in Germany recently published a study in which they took a new approach to analyzing nanofiltration membranes. They used a methodology called “Thinking in terms of Structure-Activity-Relationships” (AKA T-SAR) that was first introduced in 2003 to determine the properties and the effects of different substance classes on biological systems. T-SAR was applied here to see if it could provide them with a better understanding of the NF membrane as well as predict the membrane’s performance for the recovery of ionic fluids.
T-SAR analysis makes it possible to analyze a chemical compound using only its three-dimensional chemical structure, but the process is made more difficult and complex as the size of the molecule increases. This characteristic of T-SAR creates a problem for NF materials. In order to overcome it, the researchers combined T-SAR methods with traditional membrane characterization procedures to gather more conclusive evidence on the importance of chemical structure for separation performance. The algorithm to conduct the T-SAR analysis of a chemical compound includes 17 steps in the areas of: Chemical Structure, Stereochemistry, Molecular Interaction Potentials, and Reactivity.
The materials involved in this experiment included two NF polyamide membranes (FilmTec NF-90 and NF-270) and three ionic liquids. In order to prep these membranes for T-SAR analysis, they were first subjected to some baseline analysis such as confirming their composition through spectroscopy and determining their pure water capability with an HP4750 stirred cell. The ionic liquids were tempered with deionized water to reduce the influence of additional ions and then cycled through the HP4750 to make samples of the feed, retentate, and permeate for ion-chromatography analysis.
After this preparation and traditional analysis, the materials were then subject to the full T-SAR analysis procedure to determine if it really can be used to understand NF membranes and predict their performance. You’ll have to look at the full report for all of the detailed results of the T-SAR analysis.
After all this work, the authors concluded that, “the experimental values obtained for the filtration of such ionic liquids are in good agreement with the predictions.” So it looks like T-SAR methodology might be used more often in NF membrane experiments! Sehr gut!
Read the complete report here.
| 0.9245 | FineWeb |
Next-to-next-to-leading-order Collinear and Soft Gluon Corrections for T-channel Single Top Quark Production
I present the resummation of collinear and soft-gluon corrections to single top quark production in the t channel at next-to-next-to-leading logarithm accuracy using two-loop soft anomalous dimensions. The expansion of the resummed cross section yields approximate next-to-next-to-leading-order cross sections. Numerical results for t-channel single top quark (or single antitop) production at the Tevatron and the LHC are presented, including the dependence of the cross sections on the top quark mass and the uncertainties from scale variation and parton distributions. Combined results for all single top quark production channels are also given.
| 0.5659 | FineWeb |
Sports anemia refers to a period early in training when athletes may develop low blood hemoglobin for a while; it likely reflects a normal adaptation to physical training.
Aerobic training enlarges the blood volume and, with the added fluid, the red blood cell count per unit of blood drops. While true anemia requires treatment, the temporary reduced red blood cell count seen early in training goes away by itself after a time.
Physically active young women, especially those who engage in such endurance activities as distance running, are prone to iron deficiency. Research studies show that as many as 45% of female runners of high school age have low iron stores.
Iron status may be affected by exercise in a number of ways. One possibility is that iron is lost in sweat. Although the sweat of trained athletes contains less iron than the sweat of others (likely an adaptation to conditioning), athletes sweat more copiously than sedentary people.
Another possible route to iron loss is red blood cell destruction: blood cells are squashed when body tissues (such as the soles of the feet) make high-impact contact with an unyielding surface (such as the ground). In addition, in some athletes at least, physical activity may cause small blood losses through the digestive tract.
Thirdly, the habitually low intake of iron-rich foods, combined with iron losses aggravated by physical activity, leads to iron deficiency in physically active individuals.
Iron deficiency impairs physical performance because iron is crucial to the body’s handling of oxygen. Since one consequence of iron-deficiency anemia is impaired oxygen transport, aerobic work capacity is reduced and the person tires easily. Whether marginal deficiency without anemia impairs physical performance remains a point of continuing debate among researchers.
Physical activity can also produce a hemolytic anemia caused by repetitive blows to the surfaces of the body. This condition was first noticed in soldiers after long forced marches (march hemoglobinuria). Today, it is more often seen in long-distance runners since soldiers are now better equipped with protective foot gear. March hemoglobinuria can also result from repeated blows to other body parts, and has been observed in martial arts and players of conga and bongo drums.
| 0.9555 | FineWeb |
F# Minor 7th Piano Chord
The Notes in an F# Minor 7th Chord
The root is the bottom note of the chord, the starting point to which the other notes relate. The root of an F# Minor 7th chord is F#.
The Min 3rd
The minor third of an F# Minor 7th chord is A. The minor third is up three half-steps from the Root.
Finding A from F# step by step:
- Start on: F#
- Step 1: move up to G
- Step 2: move up to G#
- Step 3: Land on A
- G is a minor second above F#.
- G# is a major 2nd above F#.
- A is a minor third above F#.
The Min 7th
The minor seventh of an F# Minor 7th chord is E. The minor seventh is down two half-steps from the Root.
Finding E from F# step by step:
- Start on: F#
- Step 1: move down to E#
- Step 2: Land on E
- E# is a minor second below F#.
- E is a minor 7th below F#? No – E is a major 2nd below F#. The min 7th is down a major 2nd? Confusing, right? The note E is down 2 half-steps from F#, but up 10 half-steps from F#.
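If it helps to see the same half-step counting as code, here is a tiny Python sketch; it only does the interval arithmetic (root, minor 3rd, 5th, minor 7th), uses sharps-only note names, and is not the three-finger method described below:

```python
# Sharps-only note names; enharmonic spellings like E# (for F) are not modeled.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def minor_seventh_chord(root: str) -> list[str]:
    """Root, minor 3rd (+3 half-steps), 5th (+7), and minor 7th (+10 up, i.e. 2 half-steps down)."""
    start = NOTES.index(root)
    return [NOTES[(start + interval) % 12] for interval in (0, 3, 7, 10)]

print(minor_seventh_chord("F#"))   # ['F#', 'A', 'C#', 'E']
```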
The Inversions of F# Minor 7th
How to find F# Minor 7th with my three-finger-method
This is the method taught in my book "How to Speed Read Piano Chord Symbols"
Step 1) Use the Fourth
Find the Root and the Fourth up from the Root. (See my tutorial on finding fourths).
Step 2) Move the right hand down
Move both fingers of the right hand down a whole-step (two keys to the left on the piano; a half-step is the next key).
How to Find 7th Chords with Nate's Three Finger Method
- Major 7th chords: bring both fingers down a half-step
- Minor 7th chords: bring both fingers down a whole-step
- Dominant 7th chords: bring the Root down a whole-step, the fourth down a half-step
- Diminished 7th chords: bring the Root down a minor third, the fourth down a whole-step
If you would like to learn more about my method, pick up "How to Speed Read Piano Chord Symbols".
| 0.9891 | FineWeb |
Problem: Find the largest size set of edges S ⊆ E such that each vertex in V is incident to at most one edge of S.
Excerpt from The Algorithm Design Manual: Consider a set of employees, each of whom is capable of doing some subset of the tasks that must be performed. We seek to find an assignment of employees to tasks such that each task is assigned to a unique employee. Each mapping between an employee and a task they can handle defines an edge, so what we need is a set of edges with no employee or job in common, i.e. a matching.
Efficient algorithms for constructing matchings are based on constructing augmenting paths in graphs. Given a (partial) matching M in a graph G, an augmenting path P is a path of edges where every odd-numbered edge (including the first and last edge) is not in M, while every even-numbered edge is. Further, the first and last vertices must not be already in M. By deleting the even-numbered edges of P from M and replacing them with the odd-numbered edges of P, we enlarge the size of the matching by one edge. Berge's theorem states that a matching is maximum if and only if it does not contain any augmenting path. Therefore, we can construct maximum-cardinality matchings by searching for augmenting paths and stopping when none exist.
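To make the augmenting-path idea concrete for the bipartite employee/task example, here is a short Python sketch of Kuhn's augmenting-path algorithm (general, non-bipartite graphs need the more involved blossom handling); the employee and task names are invented:

```python
# Bipartite maximum matching by repeatedly searching for augmenting paths.
def max_bipartite_matching(capable_of: dict[str, list[str]]) -> dict[str, str]:
    match_task = {}  # task -> employee currently assigned to it

    def try_augment(employee: str, visited: set[str]) -> bool:
        # Look for an augmenting path starting at this employee.
        for task in capable_of.get(employee, []):
            if task in visited:
                continue
            visited.add(task)
            # Either the task is free, or its current holder can be re-routed elsewhere.
            if task not in match_task or try_augment(match_task[task], visited):
                match_task[task] = employee
                return True
        return False

    for employee in capable_of:
        try_augment(employee, set())
    return {emp: task for task, emp in match_task.items()}

jobs = {"Ann": ["solder", "test"], "Bob": ["solder"], "Cat": ["test", "ship"]}
print(max_bipartite_matching(jobs))   # e.g. {'Bob': 'solder', 'Ann': 'test', 'Cat': 'ship'}
```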
Related books:
- Algorithms in Java, Third Edition (Parts 1-4) by Robert Sedgewick and Michael Schidlowsky
- Network Flows: Theory, Algorithms, and Applications by Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin
- Computational Discrete Mathematics: Combinatorics and Graph Theory with Mathematica by S. Pemmaraju and S. Skiena
- Introduction to Algorithms by T. Cormen, C. Leiserson, R. Rivest, and C. Stein
- The Stable Marriage Problem: Structure and Algorithms by D. Gusfield and R. Irving
- Introduction to Algorithms by U. Manber
- Matching Theory by L. Lovasz
- Data Structures and Network Algorithms by R. Tarjan
- Combinatorial Optimization: Algorithms and Complexity by C. Papadimitriou and K. Steiglitz
| 0.7436 | FineWeb |
The BehaviorType is one of the foundational MAEC types, and serves as a method for the characterization of malicious behaviors found or observed in malware. Behaviors can be thought of as representing the purpose behind groups of MAEC Actions, and are therefore representative of distinct portions of higher-level malware functionality. Thus, while a malware instance may perform some multitude of Actions, it is likely that these Actions represent only a few distinct behaviors. Some examples include vulnerability exploitation, email address harvesting, the disabling of a security service, etc.
The required id field specifies a unique ID for this Behavior.
The ordinal_position field specifies the ordinal position of the Behavior with respect to the execution of the malware.
The status field specifies the execution status of the Behavior being characterized.
The duration field specifies the duration of the Behavior. One way to derive such a value may be to calculate the difference between the timestamps of the first and last actions that compose the behavior.
The Purpose field specifies the intended purpose of the Behavior. Since a Behavior is not always successful, and may not be fully observed, this is meant as a way to state the nature of the Behavior apart from its constituent actions.
The Description field specifies a prose textual description of the Behavior.
The Discovery_Method field specifies the method used to discover the Behavior.
The Action_Composition field captures the Actions that compose the Behavior.
The Associated_Code field specifies any code snippets that may be associated with the Behavior.
The Relationships field specifies any relationships between this Behavior and any other Behaviors.
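As a purely illustrative sketch (not the MAEC XML serialization itself), a Behavior using the fields above might look roughly like this; every ID and value here is hypothetical:

```python
# Hypothetical example only -- field names follow the descriptions above,
# but the values, IDs, and Python-dict representation are illustrative.
behavior = {
    "id": "example-bhv-1",                     # required unique ID
    "ordinal_position": 2,                     # second Behavior in the malware's execution
    "status": "Success",                       # execution status
    "duration": "4s",                          # e.g. last action timestamp minus first
    "Purpose": "Disable a security service",
    "Description": "Stops the anti-virus service so later actions go undetected.",
    "Discovery_Method": "Dynamic analysis in a sandbox",
    "Action_Composition": ["example-act-7", "example-act-8"],  # constituent Actions
    "Associated_Code": [],                     # code snippets, if any
    "Relationships": [],                       # links to other Behaviors
}
```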
| 0.8862 | FineWeb |
DR. MARTIN LUTHER KING, JR.
50 years ago, Martin Luther King Jr. gave one of the most famous and influential speeches in American history. The “I Have a Dream” speech was effective not just for its words, but also for Dr. King’s impassioned delivery. It represented the feelings of millions of people fighting for civil liberties. The same speech, given by a lesser man in a lesser setting, may not have earned the same attention. Dr. King knew if he were to truly help bring about change, he would need a speech and setting that would inspire. The March on Washington and “I Have a Dream” speech caught the attention of a nation, and brought it closer to much-needed change.
eSpeakers believes in the power of great speeches like the “I Have a Dream” speech, and in great speakers like Dr. Martin Luther King, Jr. To honor his speech given 50 years ago, eSpeakers has created an infographic in commemoration of that great moment in American history. You can view the infographic below.
Click this link to see the full inspiring infographic:
Celebrating 50 years of the “I Have A Dream Speech” Infographic
To find great and inspiring speakers for your own event, consider searching eSpeakers Marketplace.
| 0.8576 | FineWeb |
As part of our Sonic Kayak project, we have been looking at adding new sensors to the system. These are our notes from our research and prototyping.
Since we started the Sonic Kayak project, a few people have asked us whether we could add a turbidity sensor – they were interested in using it to monitor algal blooms in an EcoPort, monitor cyanobacteria for a water company, and taking water quality readings for seaweed farming.
Turbidity sensors give a measurement of the amount of suspended solids in water – the more suspended solids, the higher the turbidity level (cloudiness) of the water. The most basic approach to working out water turbidity is to use something called a Secchi disk. These are plain white or black and white circular disks that are lowered slowly into the water, and the depth at which the disk is no longer visible is a rough measure of the cloudiness of the water. This is a great low-key approach, but the result is greatly affected by other factors such as the amount of daylight. More accurate equipment tends to use a light source and a light receptor, with the water placed in between – the amount of light that reaches the receptor from the light source gives a reading of how turbid the water is.
There are several pre-existing publications on how to make open source turbidity sensors (e.g. this and this). For the Sonic Kayaks, we sonify sensor data in realtime, and record the data every second for environmental mapping. This means we need to make a sensor that logs realtime continuous data and can be integrated into the existing Sonic Kayak kit, as opposed to a system where you take a one-off sample of water and run it through a separate piece of equipment in a laboratory.
We based our initial prototyping on the writeup found here. The basic electronics were tested on an Arduino Genuino Uno, with the modification in the code from pin ‘D1’ → ‘1’ (as D1 is not recognised as a pin number), and the addition of a 560Ω resistor for the white LED.
We cut the ends off a 50ml Falcon tube as this was the only tube-shaped thing we had available, drilled small holes for the LED and LDR, and sprayed the tube matt black on the inside and outside to reduce reflectivity from the shiny plastic tube. The LED and LDR were fixed in place using hot glue, wires soldered directly to the components, and the whole thing coated in bioresin for waterproofing (Fig 1).
For testing, we submerged the sensor in water for 20 minutes to check the waterproofing. We then took a sample of tap water, added a small amount of black acrylic paint, and did a series of arbitrary dilutions. LDRs decrease resistance with light intensity – so when more light hits the sensor, there is less resistance and the voltage reading is higher, resulting in a higher numerical output. The numerical output is related to the voltage coming in, with an analogue to digital conversion (10 bit) applied such that 0V=0 and 5V=1023. If required, it is possible to do a lookup from the specific LDR sensor curve data to work out the voltage from the numerical output. The turbidity sensor v1 prototype returned reasonably consistent numerical values that related well to the types of results we might expect (any turbidity sensor would need to be calibrated with known samples before use).
Fig 1. Prototype v1 – Test build and wiring.
Fig 2. Test dilutions for prototype v1, with numerical output ranges.
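To make the analogue-to-digital relationship concrete, here is a rough Python sketch (not the actual Sonic Kayak code); the sample readings are invented, and a real sensor would still need calibrating against samples of known turbidity:

```python
# Rough sketch of the 10-bit ADC relationship described above: 0 V -> 0, 5 V -> 1023.
def adc_to_voltage(reading: int, vref: float = 5.0, full_scale: int = 1023) -> float:
    """Convert a raw 10-bit ADC reading from the LDR divider into volts."""
    return (reading / full_scale) * vref

raw_readings = [812, 820, 815]                  # hypothetical values from the LDR pin
volts = [adc_to_voltage(r) for r in raw_readings]
print(round(sum(volts) / len(volts), 2))        # higher voltage = more light = clearer water
```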
Moving on from the proof of principle prototype v1, we made a larger turbidity sensor for prototype v2 using 40mm black plumbing pipe with longer wiring that could reach from under the kayak to the main electronics box on top of the kayak, with a single multicore cable (old network cable) that split to meet the LDR and LED. Once the components were soldered to the wiring, we used liquid electrical tape to waterproof the components and bare wire before glue-gunning the components into the tube. The cable join was then bonded to the pipe using self-amalgamating waterproof tape, just to make this weak point more robust. For this version, a mesh made from a small square of Stay Put was attached to each end of the main tube using cable ties and thin rope, to act as a filter to stop things like seaweed entering the tube. Small fishing weights were also attached to each end of tube to pull the sensor down underwater.
Fig 3. Prototype v2.
Version 2 of the turbidity sensor was integrated into the Sonic Kayak system for preliminary testing. It survived a 20-minute trip out on a lake, which is a good proof of concept for the electronics waterproofing (which is in some ways the hardest bit of the Sonic Kayak project). When paddling, the sensor stayed at a reasonably constant depth, but travelled sideways – ideally it would travel in line with the kayak, with the tube entrance/exit facing the front/back of the boat. Some options for improving this include fixing it to the kayak in some way, or designing fins attached to the tube (e.g. by 3D printing the housing as a single piece). The sensor was tested at the same time as two temperature sensors and a hydrophone, and we definitely need to work on making the sonifications from each sensor more distinct, as it became a cacophony of confusing noise rather than an informative and beautiful sonic experience. The use of mesh over each end served its purpose, but a more robust solution might again be to include this as part of a 3D printed housing, or perhaps find a local bar with a stash of politically-unusable plastic straws, chop these up into small lengths, and fill the ends of the tube with them. As it stands, we have proof of principle that this DIY sensor approach is viable, but will need to do some more work on correcting the flow direction and sonification integration before we will be happy with it.
Fig 4. Turbidity sensor prototype and the Sonic Kayak system - there are 3 kits in this photo, it's the one on the right!
Air quality sensors
This time we are following a hunch rather than pursuing an externally requested direction. Our studio is based on the edge of Falmouth harbour. This is a working harbour, used by small commercial fishing businesses, a large shipyard, houseboats, and recreational water users including yacht enthusiasts, kayakers and swimmers. Many of these users dump waste, sewage and fuel straight into the water - we routinely see slicks of fuel on the water surface and see/smell clouds of pollution in the air, and then see children jumping in the water for a swim or kayakers paddling through.
To the best of our knowledge, nobody has mapped air pollution over water, yet we believe it is likely that the local industry and other users are causing air pollution low lying over the water that could be highly damaging to the health of people and other animals that spend time on the water surface (like birds and seals). So we have started looking at integrating air quality sensors onto the Sonic Kayaks. This process begins with needing to understand the various pollutants. [apologies for the lack of subscript for the molecular formulas, it's a limitation of our web design]
Defra (the UK Government Department for Environment, Food & Rural Affairs) says this:
"Shipping is a growing sector but one of the least regulated sources of emissions of atmospheric pollutants. Shipping makes significant contributions to emissions of nitrogen oxide (NOx) and sulphur dioxide (SO2) gases, to primary PM2.5 and PM10 (particulate matter, PM with diameter less than 2.5 micrometres and 10 micrometres respectively), which includes emissions of black carbon, and to carbon dioxide. Chemical reactions in the atmosphere involving NOx and SO2, and ammonia (NH3) gas emitted from land sources (principally associated with agriculture), lead to the formation of components of secondary inorganic particulate matter. These primary and secondary pollutants derived from shipping emissions contribute to adverse human health effects in the UK and elsewhere (including cardiovascular and respiratory illness and premature death), as well as environmental damage through acidification and eutrophication."
For a little more information on the NOx and SO2 interactions they also say this:
"PM2.5 can also be formed from the chemical reactions of gases such as sulphur dioxide (SO2) and nitrogen oxides (NOx: nitric oxide, NO plus nitrogen dioxide, NO2)"
This is a totally new area for me, so my first thoughts were to go through these different pollutants and dig into what their health impacts are. The most clear information seems to be about particulate matter pollution, for example I found this from the NHS about particulate matter, saying that ‘safe levels’ are not actually safe:
"As a general rule, the lower the PM, the more dangerous the pollutant is, as very small particles are more likely to bypass the body’s defences and potentially cause lung and heart problems."
I’ve also gathered together some exposure guidelines from the World Health Organisation, the Environmental Protection Agency, and other reasonably reputable sources – the units of measurement are often different, and the limits differ depending on where you look, but it’s a start and I now feel reasonably confident that these are the pollutants that matter in our context:
Pollutant: Nitrogen dioxide (NO2)
Exposure guidelines: WHO: 40 μg/m3 annual mean, 200 μg/m3 1-hour mean
Health impacts: Causes inflammation of the airways at high levels. Can decrease lung function, increase the risk of respiratory conditions and increase the response to allergens. Defra estimates that the UK death rate is 4% higher due to nitrogen dioxide pollution – around 23,500 extra deaths per year.

Pollutant: Sulphur dioxide (SO2)
Exposure guidelines: AEGL-1 (nondisabling – may be problematic for asthmatics) 0.20ppm, AEGL-2 (disabling) 0.75ppm, AEGL-3 (lethal) 30ppm for 10 mins – 9.6ppm for 8h. WHO: 20 μg/m3 24-hour mean, 500 μg/m3 10-minute mean
Health impacts: Sulfur dioxide irritates the skin and mucous membranes of the eyes, nose, throat, and lungs. High concentrations can cause inflammation and irritation of the respiratory system. The resulting symptoms can include pain when taking a deep breath, coughing, throat irritation, and breathing difficulties. High concentrations can affect lung function, worsen asthma attacks, and worsen existing heart disease in sensitive groups.

Pollutant: Carbon monoxide (CO)
Exposure guidelines: AEGL-1 (nondisabling) – not recommended because susceptible persons may experience more serious effects at concentrations that do not affect the general population. AEGL-2 (disabling) 420ppm for 10 mins – 27ppm for 8h. AEGL-3 (lethal) 1800ppm for 10 mins – 130ppm for 8h
Health impacts: Carbon monoxide enters your bloodstream and mixes with haemoglobin to form carboxyhaemoglobin. When this happens, the blood is no longer able to carry oxygen, and this lack of oxygen causes the body's cells and tissue to fail and die. A tension-type headache is the most common symptom of mild carbon monoxide poisoning. Other symptoms include: dizziness, feeling and being sick, tiredness and confusion, stomach pain, shortness of breath and difficulty breathing. Long-term exposure to low levels of carbon monoxide can lead to neurological symptoms like difficulty thinking or concentrating, and frequent emotional changes.

Pollutant: Ammonia (NH3)
Exposure guidelines: AEGL-1 (nondisabling) 30ppm, AEGL-2 (disabling) 220ppm for 10 mins – 110ppm for 8h, AEGL-3 (lethal) 2700ppm for 10 mins – 390ppm for 8h.
Health impacts: Irritation to eyes, nose, throat; dyspnea (breathing difficulty), wheezing, chest pain; pulmonary edema; pink frothy sputum; skin burns, vesiculation.

Pollutant: Primary PM2.5
Exposure guidelines: ‘There is understood to be no safe threshold below which no adverse effects would be anticipated.’ 7% increase in mortality with each 5 micrograms per cubic metre increase in particulate matter with a diameter of 2.5 micrometres (PM2.5). European annual mean limit of 25 μg/m3. WHO: 10 μg/m3 annual mean, 25 μg/m3 24-hour mean
Health impacts: Particles in the PM2.5 size range are able to travel deeply into the respiratory tract, reaching the lungs. Exposure to fine particles can cause short-term health effects such as eye, nose, throat and lung irritation, coughing, sneezing, runny nose and shortness of breath. Exposure to fine particles can also affect lung function and worsen medical conditions such as asthma and heart disease. Scientific studies have linked increases in daily PM2.5 exposure with increased respiratory and cardiovascular hospital admissions, emergency department visits and deaths. Studies also suggest that long-term exposure to fine particulate matter may be associated with increased rates of chronic bronchitis, reduced lung function and increased mortality from lung cancer and heart disease. People with breathing and heart problems, children and the elderly may be particularly sensitive to PM2.5.

Pollutant: Coarse particulate matter: Primary PM10
Exposure guidelines: WHO: 20 μg/m3 annual mean, 50 μg/m3 24-hour mean.
Health impacts: As for PM2.5, but these coarser particles are of less risk than PM2.5.
The next step is to look at sensors. Via the wonders of Twitter, we were recommended alphasense for pre-made gas sensors. Apparently this technology is hard to calibrate, and cheaper sensors tend to drift in their calibration, so we might end up only being able to look at relative values if we were to produce a map of air quality over water. This might be OK, but it would be nicer to be able to compare against the ‘safe’ exposure limits. One option might be to calibrate against a more professional/pricey sensor setup at a fixed location before and after doing the mapping.
Since the world of gas sensing is mainly done using nanotechnology, it's probably currently a bit out of scope for in-house DIY approaches. As a compromise, we thought it was worth trying an Enviro+, a premade add-on for a Raspberry Pi that measures air quality (pollutant gases), temperature, pressure, humidity, light, and noise level.
Fig 5. The Enviro+ that we tried and blew up
We had a go at integrating an Enviro+ into the Sonic Kayak system (no easy job given the number of different sensors we’re now trying to run), and got it working alongside the prototype turbidity sensor. The analogue to digital converter on the Enviro+ is higher resolution than the one we already had on the Arduino or ATmega328 chip that we use, which is great because it gives more sensitive readings. The LCD screen was a nice touch and could prove useful for debugging. There’s an obvious problem with the design limitations though, as all our kit is sealed inside a waterproof box, with cable glands to pass wiring through the box – an air quality sensor needs to be exposed to the air, so we’d need to think about the design practicalities including waterproofing. Sadly we blew up our Enviro+ by later trying to power it from the 5V and ground pins rather than plugging it into the GPIO, as we need that free for our GPS and other sensors. Probably we just blew up the voltage regulator and could re-use the sensor components themselves. Since it seemed technically viable, we looked a bit more into what the Enviro+ is actually measuring. The makers say:
“The analog gas sensor can be used to make qualitative measurements of changes in gas concentrations, so you can tell broadly if the three groups of gases are increasing or decreasing in abundance. Without laboratory conditions or calibration, you won't be able to say "the concentration of carbon monoxide is n parts per million", for example. Temperature, air pressure and humidity can all affect particulate levels (and the gas sensor readings) too, so the BME280 sensor on Enviro+ is really important to understanding the other data that Enviro+ outputs.”
Looking into these ‘three groups of gases’, it turns out that they basically have 3 sensors which detect carbon monoxide (CO, reducing), nitrogen dioxide (NO2, oxidising) and ammonia (NH3). But – these sensors are also sensitive to other very common gases (like hydrogen!) - which means that the output from a sensor doesn't necessarily reflect the amount of the gas you are interested in; it might reflect a mix of gases. Again calibration is an issue, so we'd only ever be likely to be looking at relative values, and also we wouldn't be sure what gases we were actually detecting. It seems like low-cost research-grade gas sensing is still a little way off. The exception seems to be NH3, which might not be worthwhile detecting in its own right, as it only really seems to be an issue because it is a precursor for particulate matter:
“As a secondary particulate precursor, NH3 also contributes to the formation of particulate aerosols in the atmosphere. Particulate matter is an important air pollutant due to its adverse impact on human health and NH3 is therefore also indirectly linked to effects on human health”
In the interests of getting something up and running quickly, that fits with our open hardware ethos, we may be better off starting by just looking at particulate matter. Our brilliant friend and data visualiser, Miska Knapek pointed us towards Luftdaten, which he is currently working on. They have designed and published plans for a fine particulate matter (PM2.5) sensor that is open source and arduino based. The challenge with this is going to be waterproofing it for use on the boats, as unlike rain, water when kayaking can come from all directions, including all at once if you capsize. There are also pre-made cheap (£25) particulate sensors, for example this one which is small enough to use on a kayak at ~5cm and is designed to work with the Enviro+ and Raspberry Pi. These have fans to suck air through them and a laser to detect the number and size of particles in the air, and they work for various sizes of particulate matter (PM1.0, PM2.5 and PM10).
This is all a very new area for us (and it’s a big area!), so if we’ve made any mistakes or missed anything obvious we’d love to hear your ideas. It seems very feasible to add turbidity and particulate matter sensors, so if you’re interested in using these then it would also be helpful to get in touch, as we’ll need examples of practical uses if we’re to look for some funding to support adding these.
This R&D work has been funded by Smartline (European Regional Development Fund).
| 0.8261 | FineWeb |
What an English homework helper can do for you?
Homework helpers are all the rage these days. If you are not familiar with the term, these online services help you with your homework. You will find quite a number of agencies that provide homework help if you Google the term. It is the perfect answer to all the busy students burdened with extra work and studies. It is especially useful for students who work after school or have other responsibilities that do not leave enough time to tackle Mt. homework every day. An English homework helper can help you with:
- 1. Guidance: If all you need is some pointers every now and then, you can ask your English homework helper to customize the help so that you get assistance with the tasks assigned at school.
- 2. Lessons: Online lessons can come in many forms. Your homework helper can provide you with Audio/Video lessons. Some homework helpers offer live lessons online by expert tutors.
- 3. Notes: Your English homework helper will give you lecture notes and other texts for your use at your leisure. This works best in combination with tuitions and audio/video lessons.
- 4. Writing help: This is where things get interesting. Suppose you are given an essay-writing task, or you are required to write a term paper. Suppose you are not in a position to write it due to some reason, or are not very good at English writing. What do you do? You get in touch with a homework helper and get a) writing tips/guidelines, OR b) you get it professionally written!
- 5. Doing your homework for you: And this is where it crescendos: You can outsource your homework to a homework helper if you feel that no amount of assistance can solve your English homework problems.
While homework help is legal, you will find some bad quality online agencies. These will charge you the average or lower fees, and will provide you with essays and assignments that are either rehashes of past essays or are frank plagiarisms. Do your background research before registering with an agency and paying their fees. Ask friends, acquaintances, and classmates for a recommendation. You can also visit online students’ forums, blogs, and listings to learn about reliable agencies. Do not fall for great-sounding cheap packages. You will avoid a lot of headache and heartache if you select the right agency to assist you with your homework.
| 0.7093 | FineWeb |
When I had that same problem, it turned out my fuel distributor had gone bad. The part I keep asking about and never get an answer to: why, when the EHA reads rich, does the duty cycle read lean, just like in your pictures, and then when the EHA reads lean, the duty cycle reads rich?
Steve said he was worried about the EHA reading, but all the info refers to X11 for all diagnostics. If you modified your air intake to bring in more air, then you need to richen the fuel a little to match the air coming in. More air than fuel chokes the engine, and vice versa.
Also, why is it that everybody describes the same exact problem, but the fix is different?
| 0.5761 | FineWeb |
Subject: Analysis - Internal Rate of Return (IRR)
Last-Revised: 25 June 1999
Contributed-By: Christopher Yost (cpy at world.std.com), Rich Carreiro (rlcarr at animato.arlington.ma.us)
If you have an investment that requires and produces a number of cash flows over time, the internal rate of return is defined to be the discount rate that makes the net present value of those cash flows equal to zero. This article discusses computing the internal rate of return on periodic payments, which might be regular payments into a portfolio or other savings program, or payments against a loan. Both scenarios are discussed in some detail.
We'll begin with a savings program. Assume that a sum "P" has been invested into some mutual fund or like account and that additional deposits "p" are made to the account each month for "n" months. Assume further that investments are made at the beginning of each month, implying that interest accrues for a full "n" months on the first payment and for one month on the last payment. Given all this data, how can we compute the future value of the account at any month? Or if we know the final value of the account and the investments made over time, what was the interal rate of return?
The relevant formula that will help answer these questions is:
F = -P(1+i)^n - [p(1+i)((1+i)^n - 1)/i]
- "F" is the future value of your investment; i.e., the value after "n" months or "n" weeks or "n" years--whatever the period over which the investments are made)
- "P" is the present value of your investment; i.e., the amount of money you have already invested (a negative value - see below)
- "p" is the payment each period (a negative value - see below)
- "n" is the number of periods you are interested in (number of payments)
- "i" is the interest rate per period.
Note that the symbol '^' is used to denote exponentiation (for example, 2 ^ 3 = 8).
Very important! The values "P" and "p" above should be negative. This formula and the ones below are devised to accord with the standard practice of representing cash paid out as negative and cash received (as in the case of a loan) as positive. This may not be very intuitive, but it is a convention that seems to be employed by most financial programs and spreadsheet functions.
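If it helps to check the arithmetic, here is a small Python sketch of the formula above; the deposit amounts and rate in the example are arbitrary:

```python
# Savings case: payments made at the *beginning* of each period.
# P and p follow the sign convention above: cash paid out is negative.
def future_value_begin(P: float, p: float, i: float, n: int) -> float:
    """F = -P(1+i)^n - p(1+i)((1+i)^n - 1)/i"""
    return -P * (1 + i) ** n - p * (1 + i) * ((1 + i) ** n - 1) / i

# $1,000 already invested, $100 deposited at the start of each month, 0.5% per month, 12 months.
print(round(future_value_begin(P=-1000, p=-100, i=0.005, n=12), 2))   # about 2301.40
```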
The formula used to compute loan payments is very similar, but as is appropriate for a loan, it assumes that all payments "p" are made at the end of each period:
F = -P(1+i)^n - [p((1+i)^n - 1)/i]
Note that this formula can also be used for investments if you need to assume that they are made at the end of each period. With respect to loans, the formula isn't very useful in this form, but by setting "F" to zero, the future value (one hopes) of the loan, it can be manipulated to yield some more useful information.
To find what size payments are needed to pay-off a loan of the amount "P" in "n" periods, the formula becomes this:
p = -Pi(1+i)^n / ((1+i)^n - 1)
If you want to find the number of periods that will be required to pay-off a loan use this formula:
n = [log(-p) - log(-Pi - p)] / log(1+i)
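The two loan formulas can be sketched the same way; the principal and rate in the example are arbitrary:

```python
import math

def loan_payment(P: float, i: float, n: int) -> float:
    """p = -Pi(1+i)^n / ((1+i)^n - 1); P is the principal received (positive)."""
    return -P * i * (1 + i) ** n / ((1 + i) ** n - 1)

def periods_to_payoff(P: float, p: float, i: float) -> float:
    """n = [log(-p) - log(-Pi - p)] / log(1+i); p is the payment made (negative)."""
    return (math.log(-p) - math.log(-P * i - p)) / math.log(1 + i)

payment = loan_payment(P=10_000, i=0.01, n=36)   # 36 monthly payments at 1%/month, about -332.14
print(round(payment, 2), round(periods_to_payoff(10_000, payment, 0.01), 1))   # ... and 36.0
```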
Keep in mind that the "i" in all these formulas is the interest rate per period. If you have been given an annual rate to work with, you can find the monthly rate by adding 1 to the annual rate, taking the 12th root of that number, and then subtracting 1. The formula is:
i = ( r + 1 ) ^ 1/12 - 1
where "r" is the rate.
Conversely, if you are working with a monthly rate--or any periodic rate--you may need to compound it to obtain a number you can compare apples-to-apples with other rates. For example, a 1 year CD paying 12% in simple interest is not as good an investment as an investment paying 1% compounded per month. If you put $1000 into each, you'll have $1120 in the CD at the end of the year but $1000*(1.01)^12 = $1126.82 in the other investment due to compounding. In this way, interest rates of any kind can be converted to a "simple 1-year CD equivalent" for the purposes of comparison. (See the article "Computing Compound Return" for more information.)
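A quick numerical check of both conversions, as a Python sketch:

```python
annual = 0.12
monthly_equivalent = (1 + annual) ** (1 / 12) - 1   # i = (r+1)^(1/12) - 1
print(round(monthly_equivalent, 5))                 # about 0.00949 per month
print(round(1000 * 1.01 ** 12, 2))                  # about 1126.8, vs. 1120.00 from simple interest
```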
You cannot manipulate these formulas to get a formula for "i", but that rate can be found using any financial calculator, spreadsheet, or program capable of calculating Internal Rate of Return or IRR.
Technically, IRR is a discount rate: the rate at which the present value of a series of investments is equal to the present value of the returns on those investments. As such, it can be found not only for equal, periodic investments such as those considered here but for any series of investments and returns. For example, if you have made a number of irregular purchases and sales of a particular stock, the IRR on your transactions will give you a picture of your overall rate of return. For the matter at hand, however, the important thing to remember is that since IRR involves calculations of present value (and therefore the time-value of money), the sequence of investments and returns is significant.
Here's an example. Let's say you buy some shares of Wild Thing Conservative Growth Fund, then buy some more shares, sell some, have some dividends reinvested, even take a cash distribution. Here's how to compute the IRR.
You first have to define the sign of the cash flows. Pick positive for flows into the portfolio, and negative for flows out of the portfolio (you could pick the opposite convention, but in this article we'll use positive for flows in, and negative for flows out).
Remember that the only thing that counts are flows between your wallet and the portfolio. For example, dividends do NOT result in cash flow unless they are withdrawn from the portfolio. If they remain in the portfolio, be they reinvested or allowed to sit there as free cash, they do NOT represent a flow.
There are also two special flows to define. The first flow is positive and is the value of the portfolio at the start of the period over which IRR is being computed. The last flow is negative and is the value of the portfolio at the end of the period over which IRR is being computed.
The IRR that you compute is the rate of return per whatever time unit you are using. If you use years, you get an annualized rate. If you use (say) months, you get a monthly rate which you'll then have to annualize in the usual way, and so forth.
On to actually calculating it... We first have the net present value or NPV:
NPV(C, t, d) = Sum from i=0 to N of C[i]/(1+d)^t[i]
where:
- C[i] is the i-th cash flow (C[0] is the first, C[N] is the last).
- d is the assumed discount rate.
- t[i] is the time between the first cash flow and the i-th. Obviously, t[0]=0 and t[N]=the length of time under consideration. Pick whatever units of time you like, but remember that IRR will end up being the rate of return per chosen time unit.
Given that definition, IRR is defined by the equation:
NPV(C, t, IRR) = 0.
In other words, the IRR is the discount rate which sets the NPV of the given cash flows made at the given times to zero.
In general there is no closed-form solution for IRR. One must find it iteratively. In other words, pick a value for IRR. Plug it into the NPV calculation. See how close to zero the NPV is. Based on that, pick a different IRR value and repeat until the NPV is as close to zero as you care.
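Here is a minimal Python sketch of that search using bisection (an illustration, not the 'irr' program mentioned later); the cash flows in the example are invented and follow the sign convention above:

```python
def npv(flows, times, d):
    """Net present value of cash flows C[i] made at times t[i] at discount rate d."""
    return sum(c / (1 + d) ** t for c, t in zip(flows, times))

def irr(flows, times, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisect for the rate where NPV crosses zero (assumes one sign change in [lo, hi])."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(flows, times, lo) * npv(flows, times, mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Start value 10,000 (in), buy 2,000 more after half a year (in), worth 13,500 at year end (out).
print(irr([10_000, 2_000, -13_500], [0.0, 0.5, 1.0]))   # annualized IRR, about 0.137
```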
Note that in the case of a single initial investment and no further investments made, the calculation collapses into:
(Initial Value) - (Final Value)/(1+IRR)^T = 0
or
(Initial Value)*(1+IRR)^T - (Final Value) = 0
Initial*(1+IRR)^T = Final
(1+IRR)^T = Final/Initial
And finally the quite familiar:
IRR = (Final/Initial)^(1/T) - 1
You can probably calculate IRR in your favorite spreadsheet program. A little command-line program named 'irr' that calculates IRR is also available. See the article Software - Archive of Investment-Related Programs in this FAQ for more information.
Previous article is Analysis: Goodwill
Next article is Analysis: Loan Payments and Amortization
Category is Analysis|
Index of all articles
| 0.9039 | FineWeb |
This section highlights the ways in which new and ongoing National Institute on Aging (NIA)-supported programs, centers, and collaborative efforts are advancing Alzheimer’s research.
A key component of the Federal research program for Alzheimer’s disease is to create and sustain an infrastructure that supports and enhances scientific discovery and translation of discoveries into Alzheimer’s disease prevention and treatment. NIA’s coordinating mechanisms and key initiatives are central to this effort. Specifically, important advances are being made by supporting high-quality research, from which data can be pooled and shared widely and efficiently through a well-established Alzheimer’s disease research infrastructure.
The infrastructure and initiatives described in this report seek to bring together researchers and Alzheimer’s interests by:
- Convening and collaborating in workshops addressing new scientific areas
- Working across NIH to vigorously discuss new science and opportunities for new investment
- Partnering with other Federal agencies, not-for-profit groups, and industry in the shared goals of improved treatments, new prevention strategies, and better programs for people with Alzheimer’s and their caregivers.
The current research infrastructure supported by NIH includes:
NIA Intramural Research Program (NIA IRP). In addition to funding a broad portfolio of aging-related and Alzheimer’s research at institutions across the country, NIA supports its own laboratory and clinical research program, based in Baltimore and Bethesda, MD. The NIA IRP focuses on understanding age-related changes in physiology and behavior, the ability to adapt to biological and environmental stresses, and the pathophysiology of age-related diseases such as Alzheimer’s.
Laboratory research ranges from studies in basic biology, such as neurogenetics and cellular and molecular neurosciences, to examinations of personality and cognition. The IRP also conducts clinical trials to test possible new interventions for cognitive decline and Alzheimer’s disease. The IRP leads the Baltimore Longitudinal Study of Aging (BLSA), America’s longest-running scientific study of human aging, begun in 1958, which has provided valuable insights into cognitive change with age.
The IRP’s Laboratory of Behavioral Neuroscience is identifying brain changes that may predict age-related declines in memory or other cognitive functions. Using brain imaging techniques, such as magnetic resonance imaging, which measures structural changes, and positron emission tomography scans, which measure functional changes, IRP researchers are tracking memory and cognitive performance over time to help identify both risk and protective factors for dementia. For example, an IRP study involving more than 500 BLSA participants uses brain imaging, biomarkers, and cognitive assessments to track changes in cognitive function in people who do not develop Alzheimer’s and in those who develop cognitive impairment and dementia.
Additionally, IRP researchers help identify potential drug targets for Alzheimer’s disease, screening candidate drugs for efficacy in cell culture or animal models. The most effective compounds are moved through preclinical studies to clinical trials. IRP researchers also collaborate with academia and industry to develop agents that show promise as an Alzheimer’s intervention. Industry has licensed patents covering a variety of novel compounds from NIA for preclinical and clinical development.
NIA funds 27 Alzheimer’s Disease Centers nationwide. See a state-by-state list.
Alzheimer’s Disease Centers (ADCs). NIA-supported research centers form the backbone of the national Alzheimer’s disease research effort. These multidisciplinary centers, located at 27 institutions nationwide, promote research, training and education, and technology transfer. Thanks to the participation of people in their communities, the Centers conduct longitudinal, multi-center, collaborative studies of Alzheimer’s disease diagnosis and treatment, age-related neurodegenerative diseases, and predictors of change in people without dementia that may indicate the initial stages of disease development.
The ADCs also conduct complementary studies, such as imaging studies and autopsy evaluations. All participants enrolled in the Centers receive a standard annual evaluation. Data from these evaluations are collected and stored by the National Alzheimer’s Coordinating Center (NACC; see below) as the Uniform Data Set. The ADCs serve as sites for a number of major studies, such as national clinical trials and imaging and biomarker research.
Alzheimer’s Disease Translational Research Program: Drug Discovery, Preclinical Drug Development, and Clinical Trials. NIA has a longstanding commitment to translational research for Alzheimer’s disease. In 2005, the Institute put this effort into high gear by launching a series of initiatives aimed at supporting all steps of drug discovery through clinical development. The program’s goal is to seed preclinical drug discovery and development projects from academia and from small biotechnology companies and, in doing so, to increase the number of investigational new drug candidates that can be tested in humans.
This strategic investment has led to the relatively rapid creation of a large, diverse portfolio of projects aimed at discovery and preclinical development of novel candidate therapeutics. To date, NIA has supported more than 60 early drug discovery projects and 18 preclinical drug development projects through this program. Fifteen of the 18 preclinical drug development projects are for compounds against non-amyloid therapeutic targets, such as tau, ApoE4, pathogenic signaling cascades, and neurotransmitter receptors. Four candidate compounds projects have advanced to the clinical development stage.
This program supports outreach and education activities held at regular investigators’ meetings and at an annual drug discovery training course organized by the Alzheimer’s Drug Discovery Foundation. These meetings provide much-needed networking opportunities for NIA-funded investigators and industry and regulatory experts, as well as education of a new cadre of academic scientists.
Two major program initiatives are:
The Alzheimer’s Disease Cooperative Study develops and tests new Alzheimer’s interventions and treatments that might not otherwise be developed by industry.
- Alzheimer’s Disease Pilot Clinical Trials Initiative. This ongoing initiative, begun in 1999, seeks to increase the number and quality of preliminary clinical evaluations of interventions for Alzheimer’s, mild cognitive impairment, and age-associated cognitive decline. These trials are investigating drug and nondrug prevention and treatment interventions. The goal is not to duplicate or compete with the efforts of pharmaceutical companies but to encourage, complement, and accelerate the process of testing new, innovative, and effective treatments. The National Institute of Nursing Research, part of NIH, also participates in this initiative. See Testing Therapies to Treat, Delay, or Prevent Alzheimer’s Disease to learn more about the trials and to see a complete list of treatment and prevention trials.
- Alzheimer’s Disease Cooperative Study (ADCS). NIA launched the ADCS in 1991 to develop and test new interventions and treatments for Alzheimer’s disease that might not otherwise be developed by industry. Currently operated under a cooperative agreement with the University of California, San Diego, this large clinical trials consortium comprises more than 70 sites throughout the United States and Canada.
The ADCS focuses on evaluating interventions that will benefit Alzheimer’s patients across the disease spectrum. This work includes testing agents that lack patent protection, agents that may be useful for Alzheimer’s but are under patent protection and marketed for other indications, and novel compounds developed by individuals, academic institutions, and small biotech companies. The ADCS also develops new evaluation instruments for clinical trials and innovative approaches to clinical trial design.
Since its inception, the ADCS has initiated 32 research studies (25 drug and 7 instrument development protocols). The ADCS also provides infrastructure support to other federally funded clinical efforts, including the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and the Dominantly Inherited Alzheimer Network (DIAN). (Read more about these studies below.)
National Alzheimer’s Coordinating Center (NACC). NIA established the NACC in 1999 with the goal of pooling and sharing data on participants in ADC studies. By 2005, NACC had collected data, including neuropathological data from 10,000 brain autopsies, from some 77,000 ADC study participants. NACC then added clinical evaluations and annual follow-ups to its protocol, enriching the database with detailed longitudinal data from 26,500 participants and 2,100 brain autopsies. The data are available to Alzheimer’s researchers worldwide.
NACC data are helping to reveal different symptom patterns in different subsets of people with Alzheimer’s, patterns that would not have become apparent without analyzing a data set of this size. NACC also helps coordinate other NIA efforts, such as the identification and selection of appropriate post mortem material collected at ADCs to send to the National Cell Repository for Alzheimer’s Disease.
National Cell Repository for Alzheimer’s Disease (NCRAD). This NIA-funded repository located at Indiana University Medical Center in Indianapolis, provides resources that help researchers identify the genes that contribute to Alzheimer’s and other types of dementia. NCRAD collects and maintains biological specimens and associated data on study volunteers from a variety of sources, primarily people enrolled at the ADCs as well as those in ADNI, the Alzheimer’s Disease Genetics Consortium, and other studies. NCRAD also houses DNA samples and data from more than 900 families with multiple members affected by Alzheimer’s.
Qualified research scientists may apply to NCRAD for samples and data to conduct genetic research. Since it was funded 22 years ago, more than 150,000 biological samples have been requested and sent to more than 120 investigators and cores across the world.
NIA Genetics of Alzheimer’s Disease Data Storage Site (NIAGADS). Located at the University of Pennsylvania, NIAGADS is a Web-based warehouse for Alzheimer’s disease genetic data. All genetic data derived from NIA-funded studies on the genetics of late-onset Alzheimer’s are deposited at NIAGADS, another NIA-approved site, or both. NIAGADS currently houses 22 data sets with nearly 44,000 subjects and more than 24 billion genotypes. Data from genome-wide association studies (GWAS) that are stored at NIAGADS are also made available through the database of Genotype and Phenotype (dbGaP) at the National Library of Medicine’s National Center for Biotechnology Information, which was established to archive and distribute the results of large-scale GWAS analyses. Through dbGaP, data sets from multiple GWAS done on different platforms can be merged, and data from thousands of study participants can be analyzed together, increasing the probability of gene discovery.
Alzheimer’s Disease Education and Referral (ADEAR) Center. Congress created the ADEAR Center in 1990 to compile, archive, and disseminate information concerning Alzheimer’s disease for people with Alzheimer’s disease, their families, health professionals, and the public. Operated by NIA, the ADEAR Center is a current and comprehensive resource for Alzheimer’s disease information and referrals. All of its information about research and materials on causes, diagnosis, treatment, prevention, and caregiving are carefully researched, evidence-based, and reviewed for accuracy and integrity.
NIA supports and participates in several innovative research initiatives that are crucial to the advancement of Alzheimer’s research. These include highly collaborative and international efforts to uncover the basic mechanisms of Alzheimer’s disease, the biomarkers that signal stages of the disease, and efforts to better understand the aging brain. These research initiatives include:
The Alzheimer’s Disease Neuroimaging Initiative seeks to identify neuroimaging and other biomarkers that can detect disease progression and measure the effectiveness of potential therapies.
Alzheimer’s Disease Neuroimaging Initiative (ADNI). NIA launched this groundbreaking initiative in 2004. It is the largest public-private partnership to date in Alzheimer’s disease research, receiving generous support from private-sector companies and foundations through the Foundation for the National Institutes of Health. ADNI’s goal is to find neuroimaging and other biological markers that can detect disease progression and measure the effectiveness of potential therapies.
In the first phase of ADNI, researchers recruited 800 participants, a mix of cognitively healthy people and those with Alzheimer’s disease or MCI. To speed the pace of analysis and findings, ADNI investigators agreed to make their collected data widely available. Magnetic resonance imaging and positron emission tomography brain images as well as clinical, genetic, and fluid biomarker data are available to qualified researchers worldwide through a Web-based database.
Findings from this initiative have generated excitement about using brain and fluid biomarkers to identify people at risk for developing Alzheimer’s or to characterize the pace of deterioration. Accomplishments include new findings about how changes in the structure of the hippocampus may help gauge disease progression and the effectiveness of potential treatments, and the establishment of biomarker and imaging measures that predict risk for cognitive decline and conversion to dementia.
A follow-on effort, ADNI-GO, was launched with American Recovery and Reinvestment Act funds in 2009, followed by ADNI 2 in 2010. ADNI 2 builds on the success of earlier ADNI phases to identify the earliest signs of Alzheimer’s disease. It set a 5-year goal to recruit 550 volunteers, age 55 to 90, at 55 sites in the United States and Canada. The volunteers include people with no apparent memory problems, people with early and late MCI, and people with mild Alzheimer’s disease.
The volunteers will be followed to help define the changes in brain structure and function that take place when they transition from normal cognitive aging to MCI, and from MCI to Alzheimer’s dementia. The study uses imaging techniques and biomarker measures in blood and cerebrospinal fluid specially developed to track changes in the living brain. Researchers hope to identify who is at risk for Alzheimer’s, track progression of the disease, and devise tests to measure the effectiveness of potential interventions. ADNI2 continues to follow participants recruited for the other ADNI cohorts.
ADNI has been remarkably fruitful. To date, more than 430 papers using ADNI data have been published from investigators around the world, and many more will come as more data are collected and analyzed. The success of ADNI has also inspired similar efforts in Europe, Japan, and Australia.
Dominantly Inherited Alzheimer's Disease Network (DIAN). NIA launched this 6-year study in 2008 to better understand the biology of early-onset Alzheimer's, a rare, inherited form of the disease that can occur in people in their 30s, 40s, and 50s. People born with one of the causative gene mutations not only develop Alzheimer's disease before age 60 but also have a 50-50 chance of passing the mutation on to each of their children, so about half of the people in an affected family tree get the illness before age 60.
Scientists involved in this collaborative, international effort hope to recruit 300 adult children of people with Alzheimer’s disease to help identify the sequence of brain changes that take place before symptoms appear. By understanding this process, researchers hope to gain additional insights into the more common late-onset form of the disease.
Until DIAN, the rarity of the condition and geographic distances between affected people and research centers hindered research. Today, volunteers age 18 and older with at least one biological parent with the disease are participating in DIAN at a network of 13 research sites in the United States, England, Germany, and Australia. Each participant receives a range of assessments, including genetic analysis, cognitive testing, and brain scans, and donates blood and cerebrospinal fluid so scientists can test for biomarkers.
DIAN researchers are building a shared database of the assessment results, samples, and images to advance knowledge of the brain mechanisms involved in Alzheimer’s, eventually leading to targets for therapies that can delay or even prevent progress of the disease. The study is led by the ADC at Washington University School of Medicine in St. Louis.
Alzheimer’s Disease Genetics Initiative (ADGI) and Alzheimer’s Disease Genetics Consortium (ADGC). The study of Alzheimer’s disease genetics is complicated by the likelihood that the risk of late-onset Alzheimer’s is influenced by many genes, each of which probably confers a relatively small risk. Identifying these genes requires analyzing the genomes of large numbers of people. ADGI was launched in 2003 to identify at least 1,000 families with multiple members who have late-onset Alzheimer’s as well as members who do not. In 2009, NIA funded the ADGC to support the use of large-scale, high-throughput genetics technologies, which allow the analysis of large volumes of genetic data, needed by researchers studying late-onset Alzheimer’s.
These initiatives are achieving important results. The ADGC, for example, was one of the founding partners of a highly collaborative, international group that announced the identification of 11 new Alzheimer's risk genes in 2013. Combining previously studied and newly collected DNA data from 74,076 older volunteers with Alzheimer's and those free of the disease from 15 countries, the research offers important new insights into the disease pathways involved in Alzheimer's disease.
Research Partnership on Cognitive Aging. Through the Foundation for the National Institutes of Health, NIA and the McKnight Brain Research Foundation established the Research Partnership on Cognitive Aging in 2007 to advance our understanding of healthy brain aging and function. The partnership is currently supporting grants funded through two research Requests for Applications: “Neural and Behavioral Profiles of Cognitive Aging” and “Interventions to Remediate Age-related Cognitive Decline.” To date, Partnership-supported researchers have published 107 scientific papers. The Partnership, with co-sponsorship from the National Center for Complementary and Alternative Medicine and the NIH Office of Behavioral and Social Sciences Research, released a new Request for Application in late 2013, “Plasticity and Mechanisms of Cognitive Remediation in Older Adults,” and expects to award grants in summer 2014.
This public-private collaboration is expanding its outreach. In 2013, the McKnight Brain Research Foundation, with co-sponsorship from NIA and the National Institute of Neurological Disorders and Stroke, AARP, and the Retirement Research Foundation, contracted with the Institute of Medicine to conduct “Public Health Dimensions of Cognitive Aging.” The study is examining cognitive health and aging with a focus on epidemiology and surveillance, prevention and intervention opportunities, education of health professionals, and new approaches to enhance awareness and disseminate information to the public. The technical report, including commissioned papers, conclusions, and recommendations, will be released in 2015.
NIH Toolbox for Assessment of Neurological and Behavioral Function. Supported by the NIH Blueprint for Neuroscience Research and the NIH Office of Behavioral and Social Sciences Research, researchers developed this set of brief tests to assess cognitive, sensory, motor, and emotional function, particularly in studies that enroll many people, such as epidemiological studies and clinical trials. These royalty-free tests, developed under a contract with NIH, were unveiled in September 2012. Available in English and Spanish and applicable for use in people age 3 to 85 years, the measures enable direct comparison of cognitive and other abilities at different ages across the lifespan.
Human Connectome Project. The NIH Blueprint for Neuroscience Research, a group of 15 NIH institutes and offices engaged in brain-related research, started the Human Connectome Project in 2010 to develop and share knowledge about the structural and functional connectivity of the healthy human brain. This collaborative effort uses cutting-edge neuroimaging instruments, analysis tools, and informatics technologies to map the neural pathways underlying human brain function. Investigators will map these connectomes in 1,200 healthy adults—twin pairs and their siblings—and will study anatomical and functional connections among regions of the brain.
The data gathered will be related to behavioral test data collected using another NIH Blueprint research tool, the NIH Toolbox for Assessment of Neurological and Behavioral Function (see above), and to data on participants’ genetic makeup. The goals are to reveal the contributions of genes and environment in shaping brain circuitry and variability in connectivity and to develop faster, more powerful imaging tools. Advancing our understanding of normal brain connectivity may one day inform Alzheimer’s research.
BRAIN Initiative. The NIH Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative is part of a new Presidential focus aimed at revolutionizing our understanding of the human brain. The BRAIN Initiative aims to accelerate work on technologies that give a dynamic picture of how individual cells and complex neural circuits interact in real time. The ultimate goal is to enhance understanding of the brain and improve prevention, diagnosis, and treatment of brain diseases such as Alzheimer’s. The National Science Foundation and Defense Advanced Research Projects Agency are partnering with NIH in this initiative.
| 0.636
|
FineWeb
|
From the World Wide Web to the groundbreaking game Wolfenstein 3D, here are some of the most famous software innovations that were built using the NeXT computer operating system called NeXTSTEP.
After Steve Jobs was forced out of Apple in 1985, he helped build two other companies, Pixar and NeXT. Pixar, of course, went on to produce a string of animated blockbuster films starting with Toy Story in 1995. But what about NeXT?
In the history of software, the computer company NeXT existed for only a short but very influential period of time. But during that time, the NeXTSTEP computer operating system helped create some of the most famous software innovations in history.
The First Web Server And Web Browser: 1990
On March 12, 1989, Tim Berners-Lee submitted a proposal titled “Information Management: A Proposal” detailing the first concept of the World Wide Web. His boss, Mike Sendall, found the idea worthy enough to approve the purchase of one of the first NeXTcube computers in 1990. The retail price for the NeXT computer in 1990 was $10,000 each. Adjusting for inflation, $10,000 in the year 2020 is about $20,000.
Tim Berners-Lee then used this NeXT computer to create the first-ever web browser and web server. The web-based Internet as we know it today was created using a NeXTcube computer.
To help keep the web online, he had to attach a sticker on the side of the computer warning others not to turn it off. At the time, turning off that computer would have essentially turned off the World Wide Web. The actual NeXTcube computer that Tim Berners-Lee used to create the web is now on display at the Science Museum in London, UK.
Wolfenstein 3D, Doom, And Quake: 1992-1996
In the early 1990s, computer programmer John Carmack used the NeXT operating system to build three of the most groundbreaking video game series of the decade, Wolfenstein 3D (1992), Doom (1993), and Quake (1996).
Wolfenstein 3D was the first 3D first-person shooter game in history. Its successor, Doom, was a mega-hit and immediately paved the way for a series of popular 3D shooters including Marathon (1994), Star Wars: Dark Forces (1995), Duke Nukem 3D (1996), GoldenEye 007 (1997), Half-Life (1998), Unreal (1998), and Halo (2001), to name a few.
Carmack followed up the success of Doom with Quake in 1996. Quake featured real-time 3D rendering technology, multiplayer deathmatches and a soundtrack by Trent Reznor's band Nine Inch Nails.
Display PostScript (DPS): 1987
In the late 1980s, developers at Adobe and NeXT collaborated to create a new 2D graphics engine system for the NeXT computer operating system called Display PostScript (DPS). At the time in 1987, no other computer system had object-oriented capabilities able to handle this advanced display technology. The technology was originally developed for computer printing but was useful in everything from graphic design to the user interface in applications and operating systems.
CyberSlice: The First Online Food Delivery System: 1995
Decades before DoorDash and Grubhub, NeXT technology helped create the first online food delivery system in history called CyberSlice. Steve Jobs got the idea after seeing Sandra Bullock‘s character in the 1995 film, The Net, order a pizza online. Jobs decided to make the Hollywood concept a reality and used NeXT computers and GIS-based geolocation technology to place the first online food order in history. What did he order? A pizza with tomato and basil.
Materials connected to the CyberSlice project were curated into the "Inventions of the 20th Century, Computer Science" collection at the Smithsonian Institution in Washington DC.
Apple Operating Systems
In the mid-1990s, Apple had a serious problem to solve. They needed to make a major advancement in their operating system and were struggling to find a worthy successor to Mac OS 9. Both the BeOS and Copland were contenders but weren’t strong enough to move forward with.
However, in the decade that Steve Jobs was away from Apple, his company NeXT created a product so advanced that it had a client list that included Dell, Disney, the National Security Agency (NSA), the Central Intelligence Agency (CIA), the BBC, and the National Reconnaissance Office, among others. NeXTSTEP was the obvious choice to be the successor of Macintosh OS 9.
In 1997, Apple acquired NeXT for $429 million. That deal not only gave the company NeXT's revolutionary operating system called NeXTSTEP, but it also brought Steve Jobs back to Apple.
There are countless features and applications from NeXTSTEP that you can still find in the Apple operating system family today including Mac OS X, macOS, iOS, iPadOS, watchOS, and tvOS. Although there’s a lot going on under the hood, visible interface elements like the dock, spinning beach ball, and column view as well as applications such as TextEdit and Chess, are descendants of NeXTSTEP applications.
Famous Achievements In Software History That Were Built Using The NeXT Operating System
The NeXT operating system only existed from 1989 to 1997. But during that short time, it was responsible for several noteworthy achievements in computing and software history. Did you own a NeXT computer? Please tell us about your experiences in the comments or tweet us at @methodshop.
| 0.5139
|
FineWeb
|
Where do you want to go on your next family day out? Space? Why not!
The Adler Planetarium is a great place in Chicago to explore what is out there above Earth's atmosphere. It's also America's FIRST planetarium, being founded all the way back in 1930, so there is a fascinating history to learn about before you even get to a telescope.
If you love all things space, you will love the exhibits that the Adler Planetarium has for you. Have you ever looked up into the sky and wondered what else there is to know about the moon? Mission Moon has the answer! You can take a journey and discover all the dangers and thrills of what it really means for those astronauts to take a trip to the moon.
There are exhibits which give kids a chance to learn about the history of cultures of the world too. Astronomy In Culture looks at what other cultures of the past thought about the moon - from South America to Egypt. Even the Middle East! The moon has been around since long before even the dinosaurs, so just think - all those great figures of the past will have seen the very same moon that you are looking at!
You're in the wonderful city of Chicago, so it's only fitting there is an exhibit on what the sky above Chicago in 1913 would have looked like. It's true, for all those space buffs, the stars wouldn't have changed positions. BUT - there was a lot less light pollution, so when you looked up into the sky over 100 years ago - it would have looked like a blanket of stars! Can you imagine looking up now and seeing something so amazing?
From planets to the solar system, there are plenty of exhibits covering some pretty amazing topics!
Adler Planetarium also has overnight stays, after school hang outs, and three theaters where you can catch some pretty cool films about space!
Are you excited to explore space?
| 0.5252
|
FineWeb
|
Double vision (diplopia)
When a person experiences double vision, or diplopia, they see two images of the same thing at the same time.
Double vision may be a long-term problem, or the symptoms may come and go.
Double vision may affect a person's ability to drive safely and the DVLA may need to be told about the condition.
What causes diplopia or double vision?
Opening your eyes and seeing a single clear image is something you probably take for granted. But that seemingly automatic process depends on the orchestration of multiple areas of the vision system. They all need to work together seamlessly:
- The cornea is the clear outermost disc covering the eye. It allows in light.
- The lens is behind the pupil. It focuses light onto the retina.
- Muscles of the eye, called extraocular muscles, perform the eye's precise movements.
- Nerves carry visual information from the eyes to the brain.
- The brain is where several areas process visual information from the eyes.
Problems with any part of the vision system can lead to diplopia. It makes sense to consider the causes of diplopia according to the part of the visual system that has the problem.
Cornea problems. Problems with the cornea often cause double vision in one eye only. Covering the affected eye makes the diplopia go away. The damaged surface of the eye distorts incoming light, causing double vision. Damage can happen in several ways:
- Infections of the cornea, such as shingles (herpes zoster), can distort the cornea.
- An uncommon complication of LASIK surgery (laser eye surgery) can leave one cornea altered, creating unequal visual images.
Lens problems. Cataracts are the most common problem with the lens that causes double vision. If cataracts are present in both eyes, images from both eyes will be distorted. Cataracts are often correctable with surgery.
Muscle problems. If a muscle in one eye is weak, that eye can't move smoothly with the healthy eye. Gazing in directions controlled by the weak muscle causes double vision. Muscle problems can result from several causes:
- Myasthenia gravis is an autoimmune illness that blocks the stimulation of muscles by nerves inside the head. The earliest signs are often double vision and drooping eyelids (ptosis).
- Graves' disease is a thyroid condition that weakens the muscles of the eyes. Graves' disease commonly causes vertical diplopia. With vertical diplopia, one image is on top of the other.
Nerve problems. Several different conditions can damage the nerves and lead to double vision:
- Multiple sclerosis can affect nerves anywhere in the brain or spinal cord. If the nerves controlling the eyes are damaged, double vision can result.
- Guillain-Barre syndrome is a nerve condition that causes progressive weakness. Sometimes, the first symptoms occur in the eyes and cause double vision.
- Uncontrolled diabetes can lead to nerve damage in one of the eyes, causing eye weakness and diplopia.
Brain problems. The nerves controlling the eyes connect directly to the brain. Further visual processing takes place inside the brain. Many different causes for diplopia originate in the brain. They include:
| 0.9988
|
FineWeb
|
What people are saying - Write a review
Muhammad Iqbal occupies a unique place in history, not only because of his poetry but also due to his political contribution and universal appeal. He was dead set against slavery and termed it a kind of death. Iqbal is of the view that when a person recognises his hidden potentials he becomes capable of realizing the creator of the universe and the mystery of the universe. Anyone who recognises the existence of God cannot tolerate the rule of any other entity, human or non-human.
His Persian poetry is more penetrating and deep-rooted if we compare it with his Urdu verse. Nicholson devoted ceaseless efforts to reaching the depth of Iqbal's message. From Persian, he translated Iqbal's poetry into standard English.
Many other writers have tried to translate Iqbal's verses, but Nicholson is great.
Muhammad Ayub Munir
| 0.5171
|
FineWeb
|
US 7576043 B2
A wellbore fluid comprising a surfactant, the surfactant having the formula (R1—X)nZ, wherein R1 is an aliphatic group—comprising a C18-C22 principal straight chain bonded at a terminal carbon atom thereof to X, and comprising at least one C1-C2 side chain—X is a charged head group, Z is a counterion, and n is an integer which ensures that the surfactant is charge neutral, and wherein the charged head group X is selected to provide that the surfactant is soluble in oil and at least one part of the charged head group is anionic.
1. A wellbore fluid configured for use in hydrocarbon recovery, comprising an aqueous solution of:
a surfactant, the surfactant in said solution consisting of a thickening amount of surfactant which is soluble in aqueous solutions and has the formula (R1—X)nZ, wherein:
R1 is an aliphatic group comprising a C16-C24 principal straight chain bonded at a terminal carbon atom thereof to X, and comprising at least one C1 or C2 side chain; and
X being a charged head group, Z being a counterion, and n being an integer which ensures that the surfactant is charge neutral; and wherein:
the charged head group X is selected to provide that the surfactant is soluble in oil; and
at least one part of the charged head group is anionic;
wherein the wellbore fluid is a viscoelastic gel and wherein said gel undergoes a reduction in viscosity on contact with oil.
2. The wellbore fluid according to
3. The wellbore fluid according to
4. The wellbore fluid according to
5. The wellbore fluid according to
This application claims the benefit of and is a continuation of U.S. application Ser. No. 10/343,401 U.S. Pat. No. 7,196,041 filed on Oct. 15, 2003, which is incorporated by reference in its entirety for all purposes.
The present invention relates to a surfactant, and in particular to a surfactant thickening agent for use in hydrocarbon recovery.
In the recovery of hydrocarbons, such as oil and gas, from natural hydrocarbon reservoirs, extensive use is made of wellbore fluids such as drilling fluids, completion fluids, work over fluids, packer fluids, fracturing fluids, conformance or permeability control fluids and the like.
In many cases significant components of wellbore fluids are thickening agents, usually based on polymers or viscoelastic surfactants, which serve to control the viscosity of the fluids. Typical viscoelastic surfactants are N-erucyl-N,N-bis(2-hydroxyethyl)-N-methyl ammonium chloride and potassium oleate, solutions of which form gels when mixed with corresponding activators such as sodium salicylate and potassium chloride.
The surfactant molecules are characterized by having one long hydrocarbon chain per surfactant headgroup. In the viscoelastic gelled state these molecules aggregate into worm-like micelles. Gel breakdown occurs rapidly when the fluid contacts hydrocarbons which cause the micelles to change structure or disband.
In practical terms the surfactants act as reversible thickening agents so that, on placement in subterranean reservoir formations, the viscosity of a wellbore fluid containing such a surfactant varies significantly between water- or hydrocarbon-bearing zones of the formations. In this way the fluid is able preferentially to penetrate hydrocarbon-bearing zones.
The use of viscoelastic surfactants for fracturing subterranean formations is discussed in EP-A-0835983.
A problem associated with the use of viscoelastic surfactants is that stable oil-in-water emulsions are often formed between the low viscosity surfactant solution (i.e. broken gel) and the reservoir hydrocarbons. As a consequence, a clean separation of the two phases can be difficult to achieve, complicating clean up of wellbore fluids. Such emulsions are believed to form because conventional wellbore fluid viscoelastic surfactants have little or no solubility in organic solvents.
A few anionic surfactants exhibit high solubility in hydrocarbons but low solubility in aqueous solutions. A well known example is sodium bis(2-ethylhexyl) sulphosuccinate, commonly termed aerosol OT or AOT (see K. M. Manoj et al., Langmuir, 12, 4068-4072, (1996)). However, AOT does not form viscoelastic solutions in aqueous media, e.g. the addition of salt causes precipitation.
A number of cationic surfactants, based on quaternary ammonium and phosphonium salts, are known to exhibit solubility in water and hydrocarbons and as such are frequently used as phase-transfer catalysts (see C. M. Starks et al., Phase-Transfer Catalysis, pp. 125-153, Chapman and Hall, New York (1994)). However, those cationic surfactants which form viscoelastic solutions in aqueous media are poorly soluble in hydrocarbons, and are characterized by values of Kow very close to zero, Kow being the partition coefficient for a surfactant in oil and water (Kow=Co/Cw, where Co and Cw are respectively the surfactant concentrations in oil and water). Kow may be determined by various analytical techniques, see e.g. M. A. Sharaf, D. L. Illman and B. R. Kowalski, Chemometrics, Wiley Interscience, (1986), ISBN 0471-83106-9.
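As a simple illustration of how Kow is obtained once Co and Cw have been measured, the sketch below shows the arithmetic; the concentrations used are hypothetical values chosen only for illustration, not data from the examples later in this document.

```python
def partition_coefficient(conc_oil, conc_water):
    """Kow = Co / Cw: the surfactant concentration in the oil phase divided by
    that in the aqueous phase (both expressed in the same units, e.g. wt%)."""
    return conc_oil / conc_water

# Hypothetical equilibrium concentrations, for illustration only
print(round(partition_coefficient(0.5, 4.5), 2))  # 0.11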
Typically, high solubility of the cationic surfactant in hydrocarbon solvents is promoted by multiple long-chain alkyl groups attached to the head group, as found e.g. in hexadecyltributylphosphonium and trioctylmethylammonium ions. In contrast, cationic surfactants which form viscoelastic solutions generally have only one long unbranched hydrocarbon chain per surfactant headgroup.
The conflict between the structural requirements for achieving solubility in hydrocarbons and for the formation of viscoelastic solutions generally results in only one of these properties being achieved.
An object of the present invention is to provide a surfactant which is suitable for reversibly thickening water-based wellbore fluids and is also soluble in both organic and aqueous fluids.
A first aspect of the present invention provides a surfactant having the formula (R1—X)nZ. R1 is an aliphatic group comprising a principal straight chain bonded at a terminal carbon atom thereof to X, the straight chain having a length such that a viscoelastic gel is formable by the surfactant in aqueous media; and further comprising at least one side chain (the carbon atoms of the side chain not being counted with the carbon atoms of the principal straight chain) which is shorter than said principal straight chain, said side chain enhancing the solubility of the surfactant in hydrocarbons, and being sufficiently close to said head group and sufficiently short such that the surfactant forms micelles in said viscoelastic gel. X is a charged head group, Z is a counterion, and n is an integer which ensures that the surfactant is charge neutral. Preferably the principal straight chain is a C16-C24 straight chain. Preferably the side chain is a C1-C2 side chain.
X may be a carboxylate (—COO−), quaternary ammonium (—NR2R3R4+), sulphate (—OSO3−), or sulphonate (—SO3−) charged group; N being a nitrogen atom, and R2, R3 and R4 being C1-C6 aliphatic groups, or one of R2, R3 and R4 being a C1-C6 aliphatic group and the others of R2, R3 and R4 forming a five- or six-member heterocyclic ring with the nitrogen atom.
When X is a carboxylate, sulphate, or sulphonate group, Z may be an alkali metal cation (in which case n is one) or an alkaline earth metal cation (in which case n is two). Preferably Z is Na+ or K+.
When X is a quaternary ammonium group, Z may be a halide anion, such as Cl− or Br−, or a small organic anion, such as a salicylate. In both these cases n is one.
Preferably the principal straight chain is a C16-C24 chain. More preferably it is a C18 or a C22 chain.
We have found that surfactants of this type are suitable for use as wellbore thickening agents, being soluble in both water and hydrocarbon-based solvents but retaining the ability to form aqueous viscoelastic solutions via micellar aggregation. This combination of properties is believed to be caused by the branching off from the principal straight chain of the C1-C6 side chain. The side chain apparently improves the solubility in hydrocarbon solvents by increasing the hydrophobicity of the R1 aliphatic group.
By “viscoelastic”, we mean that the elastic (or storage) modulus G′ of the fluid is greater than the loss modulus G″ as measured using an oscillatory shear rheometer (such as a Bohlin CVO 50) at a frequency of 1 Hz. The measurement of these moduli is described in An Introduction to Rheology, by H. A. Barnes, J. F. Hutton, and K. Walters, Elsevier, Amsterdam (1997).
In use, the enhanced solubility of the surfactant in hydrocarbon-based solvents can reduce the tendency for an emulsion to form between reservoir hydrocarbons and a broken surfactant gel based on the surfactant. It may also inhibit the formation of emulsions by natural surfactants in crude oil, such as naphthenic acids and asphaltenes. Additionally, dissolution of at least some of the surfactant molecules into the reservoir hydrocarbons can speed up breakdown of the gel.
Preferably, the side chain is a C1-C2 chain. We have found that, surprisingly, the solubility of the surfactant in hydrocarbon tends to increase as the size of the side chain decreases. We believe this is because smaller side chains cause less disruption to the formation of inverse micelles by the surfactant in the hydrocarbon, such inverse micelles promoting solubility in the hydrocarbon.
By altering the degree and type of branching from the principal straight chain, the surfactant can be tailored to be more or less soluble in a particular hydrocarbon. However, preferably the side chain is bonded to said terminal (α), neighbouring (β) or next-neighbouring (γ) carbon atom of the principal chain. More preferably it is bonded to the α carbon atom. We believe that locating the side chain close to the charged head group promotes the most favourable combinations of viscoelastic and solute properties.
Preferably the side chain is a methyl or ethyl group. There may be two side groups, e.g. a methyl and an ethyl group bonded to the α carbon atom.
The principal straight chain may be unsaturated.
Preferably the surfactant is an alkali metal salt of 2-methyl oleic acid or 2-ethyl oleic acid.
A second aspect of the invention provides a viscoelastic surfactant having a partition coefficient, Kow, of at least 0.05, Kow being measured at room temperature with respect to heptane and water. More desirably Kow is in the range from 0.05 to 1 and most desirably it is in the range 0.05 to 0.5. The surfactant may be a surfactant of the first aspect of the invention.
A third aspect of the invention provides an acid surfactant precursor to the surfactant of the first aspect of the invention, the acid surfactant precursor having the formula R1—Y. R1 is an aliphatic group comprising a C10-C25 principal straight chain bonded at a terminal carbon atom thereof to Y, and comprising at least one C1-C2 side chain. Y is a carboxylate (—COOH), sulphate (—OSO3H), or sulphonate (—SO3H) group.
In solution, acid surfactant precursors can be converted to the salt form, e.g. by neutralisation with the appropriate alkali or by the addition of the appropriate salt, to form surfactants of the first aspect of the invention.
A fourth aspect of the present invention provides a wellbore fluid comprising:
(b) a thickening amount of the surfactant of the first or second aspect of the invention, and
(c) an effective amount of a water-soluble, inorganic salt thickening activator.
Preferably the thickening activator is an alkali metal salt, such as KCl.
The surfactant is typically present in the fluid in a concentration of from 0.5 to 10 wt % (and more typically 0.5 to 5 wt %) and the thickening activator is typically present in the fluid in a concentration of from 1 to 10 wt %.
Desirably the wellbore fluid has a gel strength in the range 3 to 5 at room temperature, the gel strength falling to a value of 1 on contact with hydrocarbons such as heptane.
Desirably the wellbore fluid has a viscosity in the range 20 to 1000 (preferably 100 to 1000) centipoise in the shear rate range 0.1-100 (preferably 0.1-1000) s−1 at 60° C., the viscosity falling to a value in the range 1 to 200 (preferably 1 to 50) centipoise on contact with hydrocarbons such as heptane, the viscosity being measured in accordance with German DIN standard 53019.
A fifth aspect of the present invention provides for use of the wellbore fluid of the fourth aspect of the invention as a fracturing fluid, a lubricant or an emulsion breaker.
Specific embodiments of the present invention will now be described with reference to the following drawings in which:
Synthetic routes to α-, β- and γ-branched derivatives of various fatty acids are shown schematically in
A first step in a preparation of an α-branched derivative of a C10-C25 straight chain acid is the formation of an α-branch on the methyl ester of the acid. The α-branched ester can then be saponified with metal hydroxide to generate the acid salt (and thence the acid, if required).
The following examples describe in more detail the preparation and characterisation of 2-methyl oleic acid.
1. Preparation of 2-Methyl Methyl Oleate
Sodium hydride (60% dispersion, 8 g, 0.2 mol) was washed with heptane (2×15 ml) and then suspended in tetrahydrofuran (THF) (300 ml). 1,3-dimethyl-3,4,5,6-tetrahydro-2(1H)-pyrimidinone (DMPU) (26 g, 0.2 mol) was added and the mixture was stirred under an atmosphere of nitrogen. Methyl oleate (67.46 ml, 0.2 mol) was added dropwise over a period of two hours and the resulting mixture was heated to reflux for 12 hours and then cooled to 0° C. Methyl iodide (0.2 mol) was then added dropwise and the reaction mixture was again heated to reflux for a further two hours. Next the reaction mixture was cooled to 0° C. and quenched with water (15 ml), concentrated in vacuo and purified by column chromatography (SiO2, 1:9, diethyl ether:petroleum ether) to give 2-methyl methyl oleate as a yellow oil (50 g, 0.16 mol, 81%).
2. Preparation of 2-Methyl Oleic Acid
The 2-methyl methyl oleate from the above reaction (40 g, 0.13 mol) was dissolved in a (3:2:1) methanol, THF and water mixture (300 ml), and potassium hydroxide (14.4 g, 0.26 mol) was added and the reaction heated to reflux for 15 hours. The reaction mixture was then cooled and neutralised using dilute hydrochloric acid. The organic layer was separated and concentrated in vacuo, and was then purified by column chromatography (SiO2, (2:8) ethyl acetate:petroleum ether) to give 2-methyl oleic acid as an oil.
A rigid gel was formed when a 10% solution of potassium 2-methyl oleate (the potassium salt of the 2-methyl oleic acid prepared above) was mixed with an equal volume of a brine containing 16% KCl.
Contacting this gel with a representative hydrocarbon, such as heptane, resulted in a dramatic loss of viscosity and the formation of two low viscosity clear solutions: an upper oil phase and a lower aqueous phase. The formation of an emulsion was not observed. Thin-layer chromatography and infrared spectroscopy showed the presence of the branched oleate in both phases.
The gel is apparently broken by a combination of micellar rearrangement and dissolution of the branched oleate in the oil phase. Consequently the breaking rate of the branched oleate is faster than that of the equivalent unbranched oleate. This is demonstrated in
Gel strength is a semi-quantitative measure of the flowability of surfactant-based gel relative to the flowability of the precursor fluid before addition of the surfactant. There are four gel strength codings ranging from 1 (flowability of the original precursor fluid) to 4 (deformable, non-flowing gel). A particular gel is given a coding by matching the gel to one of the illustrations shown in
Using infra-red spectroscopy, the value of Kow for the potassium 2-methyl oleate of the broken branched gel was measured as 0.11. In contrast the value of Kow for the potassium oleate of the broken unbranched gel was measured as effectively zero.
The rapid breakdown of the branched oleate surfactant gels, with little or no subsequent emulsion, leads to the expectation that these gels will be particularly suitable for use as wellbore fluids, such as fluids for hydraulic fracturing of oil-bearing zones. Excellent clean up of the fluids and reduced impairment of zone matrix permeability can also be expected because emulsion formation can be avoided.
While the invention has been described in conjunction with the exemplary embodiments described above, many equivalent modifications and variations will be apparent to those skilled in the art when given this disclosure. Accordingly, the exemplary embodiments of the invention set forth above are considered to be illustrative and not limiting. Various changes to the described embodiments may be made without departing from the spirit and scope of the invention.
| 0.6266
|
FineWeb
|
June 4, 2009
Apes Help Scientists Discover Origins Of Laughter
When researchers set out to study the origins of human laughter, some gorillas and chimps were literally tickled to assist.
The scientists tickled 22 young orangutans, chimpanzees, gorillas, and bonobos, as well as three human infants, then acoustically analyzed the laughing sounds they produced. The results led researchers to conclude that people and great apes inherited laughter from a common ancestor that lived more than 10 million years ago.
Although the vocalizations varied, the researchers found that the patterns of changes fit with evolutionary splits in the human and ape family tree.
"This study is the first phylogenetic test of the evolutionary continuity of a human emotional expression," said Marina Davila Ross of the University of Portsmouth in the United Kingdom.
"It supports the idea that there is laughter in apes."
A quantitative phylogenetic analysis of the acoustic data produced by the tickled infants and apes revealed that the best "tree" to represent the evolutionary relationships among those sounds matched the known evolutionary relationships among the five species based on genetics. The researchers said that the findings support a common evolutionary origin for the human and ape tickle-induced expressions.
They also provide evidence that laughter evolved slowly over the last 10 to 16 million years of primate evolutionary history.
Nevertheless, human laughter is acoustically distinct from that of great apes and reached that state through an evident exaggeration of pre-existing acoustic features after the hominin separation from ancestors shared with bonobos and chimps, about 4.5 to 6 million years ago, Ross said.
For example, humans make laughter sounds on the exhale. Although chimps do that as well, they can also laugh with an alternating flow of air, both in and out. Humans also use more regular voicing in comparison to apes, meaning that the vocal cords regularly vibrate.
Ross said the researchers were surprised to find that gorillas and bonobos can sustain exhalations during vocalization that are three to four times longer than a normal breath cycle -- an ability that had been thought to be a uniquely human adaptation, important to our capacity to speak.
"Taken together," the researchers wrote, "the acoustic and phylogenetic results provide clear evidence of a common evolutionary origin for tickling-induced laughter in humans and tickling-induced vocalizations in great apes. While most pronounced acoustic differences were found between humans and great apes, interspecific differences in vocal acoustics nonetheless supported a quantitatively derived phylogenetic tree that coincides with the well established, genetically based relationship among these species. At a minimum, one can conclude that it is appropriate to consider 'laughter' to be a cross-species phenomenon, and that it is therefore not anthropomorphic to use this term for tickling-induced vocalizations produced by the great apes."
The research was reported online on June 4th in Current Biology, a Cell Press publication.
On the Net:
| 0.7468
|
FineWeb
|
Inaugural Lecture and Reception: Professor Rebecca Sweetman, School of Classics
Professor Rebecca Sweetman of the School of Classics will give her Inaugural Lecture 'Sailing the Wine-Dark Sea: the Archaeology of Roman Crete and the Cyclades'.
Commonly perceived as pawns in wider imperial machinations, Crete and the Cyclades have often been side-lined as peripheral due to their assumed seclusion. However, even a brief analysis of the archaeological evidence indicates that these islands not only played significant roles within the wider Roman Empire, but in some cases, they flourished as a result. Furthermore, these islands experienced the monumentalized manifestation of Christianity much earlier than their mainland counterparts to the west. This unexpected success can be seen in terms of resilience. To establish why this is the case, it is necessary to shed the bias of preconceived notions of insularity. In doing so, this allows the significant variety of communication networks the islands had to be identified. Following a brief introduction to the methodologies, topography and fieldwork, in this talk I will focus on how island resilience helped shaped the success stories of Crete and the Cyclades in the Roman and Late Antique periods.
| 0.8689
|
FineWeb
|
Marketing plays a vital role in the product or service development process. Before development begins, Marketing should determine:
- Who else is making similar widgets;
- How to differentiate the planned widget from all other widgets;
- What the potential market is for the planned widget, and;
- What the price should be.
These last two bullets are vital to ensure that the company can recover the widget development costs and then make a profit. As to differentiation from other widgets, Marketing should be part of all design reviews throughout the life of the widget to ensure that the development roadmap keeps at least one step ahead of the competition.
The alternative is a Dilbert-like organization where engineering develops a new product and then tosses it over the fence to marketing. Marketing and Sales are then supposed to sell something that is basically unsellable.
| 0.9455
|
FineWeb
|
In an attempt to prevent shark attacks, the Australian government has proposed a plan that can only be described as horrific.
Shark attacks are scary. Despite what “Jaws” taught us, however, they’re also extremely rare. Global statistics show that wasps, toasters, chairs, domestic dogs and even falling coconuts kill far more people every year than sharks. But that didn’t stop Western Australia’s government from buying into the hysteria by proposing a plan that is both barbaric and ecologically devastating.
There have been six fatal shark attacks in Australian waters over the past two years. In response, officials in Western Australia have proposed a highly controversial "shark management" plan that calls for the slaughter of any shark longer than 3 meters (9.8 feet) found swimming anywhere near popular beaches. According to the Guardian, sharks unlucky enough to get hooked on baited drum lines will be 'humanely destroyed' with a firearm. Then the shark corpses will be tagged, taken further out to sea, and dumped.
For just a moment, let's set aside the glaring fact that sharks have called the ocean home for over 400 million years, and that Australians are encroaching on their habitat, and not the other way around. Instead, let's focus on the huge impact this plan will have on the ocean ecosystem, and the very slim chance it will actually reduce attacks.
“As predators, [sharks] shift their prey’s spatial habitat, which alters the feeding strategy and diets of other species,” explains Oceana. “Through the spatial controls and abundance, sharks indirectly maintain the seagrass and corals reef habitats. The loss of sharks has led to the decline in coral reefs, seagrass beds and the loss of commercial fisheries.”
Around the globe, growing awareness about the sharp decline of shark populations has led to a surge in conservation efforts. Shark finning, spurred by the demand for shark fin soup, has been banned in several significant regions, and there's been a successful push to establish shark sanctuaries.
“While the rest of the world is turning to shark conservation, our government is sticking his head in the sand, ignoring all the experts and employing an archaic strategy,” Ross Weir, founder of Western Australians for Shark Conservation, told TIME magazine. “What they are doing is illegal and violates 15 different United Nations conventions and treaties.”
There’s also nothing to suggest that killing sharks will actually stop shark attacks. “…what will the killing of this one shark achieve? There is absolutely no evidence to support the “rogue shark” theory, sharks are no more or less likely to bite a human if they have bitten before. It will not act as a deterrent for other sharks,” blogged Dr. Rachel Robbins, chief scientist of the Fox Shark Research Foundation.
“The way to reduce attacks is not to kill anything that poses a threat to us. It is to educate people on how to minimize their risk, the times of day and conditions under which attacks are most likely to occur, put warnings at beaches that these areas are known to be frequented by white sharks.”
Related on Ecosalon
| 0.6673
|
FineWeb
|
- Factorization: 2 × 17
- Divisors: 1, 2, 17, 34
34 is the ninth distinct semiprime and has four divisors including one and itself. Its neighbors, 33 and 35, also are distinct semiprimes, having four divisors each, and 34 is the smallest number to be surrounded by numbers with the same number of divisors as it has. It is also in the first cluster of three distinct semiprimes, being within 33, 34, 35; the next such cluster of semiprimes is 85, 86, 87.
It is the ninth Fibonacci number and a companion Pell number. Since it is an odd-indexed Fibonacci number, 34 is a Markov number, appearing in solutions with other Fibonacci numbers, such as (1, 13, 34), (1, 34, 89), etc.
Thirty-four is a heptagonal number.
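Both of these properties are easy to check directly; here is a short Python sketch, assuming the usual conventions F(1) = F(2) = 1 for the Fibonacci sequence and n(5n - 3)/2 for the n-th heptagonal number.

```python
def fibonacci(n):
    # F(1) = F(2) = 1; the ninth Fibonacci number should be 34
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def heptagonal(n):
    # n-th heptagonal number, n(5n - 3)/2; the fourth is 34
    return n * (5 * n - 3) // 2

print(fibonacci(9), heptagonal(4))  # 34 34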
- The atomic number of selenium
- One of the magic numbers in nuclear physics.
- Messier object M34, a magnitude 6.0 open cluster in the constellation Perseus
- The New General Catalogue object NGC 34, a peculiar galaxy in the constellation Cetus
- The Saros number of the solar eclipse series which began in August 1917 BC and ended in February 384 BC. The duration of Saros series 34 was 1532.5 years, and it contained 86 solar eclipses.
- The Saros number of the lunar eclipse series which began in May 1633 BC and ended in June 335 BC. The duration of Saros series 34 was 1298.1 years, and it contained 73 lunar eclipses.
- The jersey number 34 has been retired by several North American sports teams in honor of past playing greats or other key figures:
- In Major League Baseball:
- The Houston Astros and Texas Rangers, both for Hall of Famer Nolan Ryan.
- The Minnesota Twins, for Hall of Famer Kirby Puckett.
- The Oakland Athletics and Milwaukee Brewers, both for Hall of Famer Rollie Fingers.
- Additionally, the Los Angeles Dodgers have not issued the number since the departure of Fernando Valenzuela following the 1990 season. Under current team policy, Valenzuela's number is not eligible for retirement because he is not in the Hall of Fame.
- In the NBA:
- In the NFL:
- In the NCAA:
- In Major League Baseball:
- 34th Street (Manhattan), a major cross-town street in New York City
- 34th Street (New York City Subway), multiple New York City subway stations
In other fields
34 is also:
- The traffic code of Istanbul, Turkey
- "#34", a song by the Dave Matthews Band
- The number of the French department Hérault
- +34 is the code for international direct-dial phone calls to Spain
- Higgins, Peter (2008). Number Story: From Counting to Cryptography. New York: Copernicus. p. 53. ISBN 978-1-84800-000-1.
- "Evidence for a new nuclear ‘magic number’" (Press release). Saitama, Japan: Riken. 2013-10-10. Retrieved 2013-10-14.
- Steppenbeck, D.; Takeuchi, S.; Aoi, N.; et al. (2013-10-10). "Evidence for a new nuclear ‘magic number’ from the level structure of 54Ca". Nature 502: 207–210. doi:10.1038/nature12522. Retrieved 2013-10-14.
| 0.9163
|
FineWeb
|
The old wastewater treatment plant in Prague-Bubeneč
The old wastewater treatment plant in Prague-Bubeneč is an important witness to the history of architecture, technology and water management. Built in 1901-1906, it was used for the treatment of most of the sewage water in the city of Prague until 1967. In the steam engine room one can view the still functioning machines from the early 20th century. The design of the sewer system with the proposed technical parameters of the treatment plant was prepared by a construction engineer of British origin, Sir William Heerlein Lindley. In 2010 his work was declared a cultural monument. The old plant is one of the most important industrial heritage sites in Europe.
The well-preserved building of the old wastewater treatment plant in Bubeneč is the oldest surviving facility of its kind in Europe: a unique piece of industrial architecture and an eco-monument of world importance, interesting from both architectural and technological points of view. Already in 1884 a competition was announced for the design of a new sewerage system and wastewater treatment plant. Several projects were drafted, but only the design of the famous English engineer Sir William Heerlein Lindley was implemented; he had extensive practical experience from other big European cities and incorporated some positive elements of earlier Czech designs into his project. His layout of the Prague sewerage network exploited the gradients of the catchment so that sewage pumping was not necessary, and the network discharged into the new wastewater treatment plant in Bubeneč. At that time Prague's sewerage system measured about 90 km. The treatment plant complex based on Lindley's project was built in 1900-1906 as part of the new Prague sewerage system, which was designed for 700,000 inhabitants. The sedimentation treatment plant in Bubeneč was the first major water treatment building in Bohemia.
It consists of a main operation building with two chimneys: a smoke chimney and a ventilation chimney. Underground there are a six-foot-deep sand trap, ten septic tanks, two wells and sewage sludge pump shafts. The sludge from the sedimentation tanks was pumped to two sludge tanks on the Emperor's Island, or to ships that transported it to other sludge tanks, from where it was sold (after drying) as a highly sought-after fertilizer. A railway branch led to the sludge tanks on the Emperor's Island. At that time the three-stage treatment reached an efficiency of about 40%. The capacity of the plant became insufficient from the 1920s onwards, and consequently only an extension was built before World War II; a brand new wastewater treatment plant was not built until much later, in 1967. Today's sewerage system is about 2,400 km long; some of the sewage conduits are man-sized, i.e. greater than 80 cm, while the rest are smaller. It has about 55,000 manholes and only 19 pumping stations, and today's wastewater treatment plants reach an efficiency of 90 to 95%. The original treatment plant remained in good condition and has been maintained next to the new one, which made it possible to establish a foundation in 1992 whose mission is to operate the Eco-museum in this precious building.
Visitors to the museum enter through the inlet crypt, where a water wheel driven by the incoming sewage used to be fitted, and then follow the flow into the largest underground structure, the sand trap, where the three main municipal sewers discharged. From there they pass the discharge sluices and mechanical rack catchers and then go down to the ten sedimentation tanks, where the primary sludge, later used as a fertilizer, settled. The highlight of the tour is the two-storey engine room with two reconstructed steam engines installed in 1904, both still functional, below which there are flood pumps. The steam boiler room with its two coal-fired boilers is also still functional.
| 0.5434
|
FineWeb
|
Welcome to the 7 Day Challenge. For 7 days, we are testing our Emergency Preparedness and Food Storage Plans. Each day will bring a NEW mock emergency, or situation that will test at least one of the reasons “WHY” we strive to be prepared! REMEMBER: No going to a store, or spending any money for the entire 7 days! And please feel free to adapt the scenarios to fit your own family and situation.
You just discovered that you have some kind of allergy to an unknown preservative. Since you aren’t able to isolate what it is exactly, you now need to avoid ALL preservatives and start cooking all of your food from scratch. This includes making a loaf of bread. Remember, no going to the store. **A little rule of thumb: you have to know where the ingredient comes from, and be able to pronounce the ingredients on any canned item you use (meaning a can of tomatoes is ok, but not a can of spaghetti sauce)**
- Cook breakfast from scratch
- Cook lunch from scratch
- Cook dinner from scratch
- Bake a loaf of homemade bread
- Print out some of your favorite recipes to use in case the internet is down during an emergency
- For this day, and ALL days of the challenge: no spending money, no going to stores, and no restaurants.
- Do not use ANY pre-packaged or convenience-type foods. No mixes, boxed cereals, canned soups or sauces etc. If you can't pronounce all the ingredients and say where it came from, it's probably a NO go.
- Do not buy or borrow ingredients. Use only what you have stored.
- Make a delicious dessert from scratch.
- Plan an entire week's worth of meals you could make out of your current food supplies.
- Do some research on the health benefits of eating fewer preservatives.
REMEMBER, TOMORROW’S CHALLENGE WILL BE DIFFERENT.
How long would you have lasted under these conditions?
Make sure your fill out today’s Report Card to see how well you did, to keep track of areas you can improve, to remember things you need to do, and things you need to buy. Use the data to make a game plan to take you to the next level of preparedness, whatever that may be.
| 0.5049
|
FineWeb
|
Containing large floor discs and smaller handheld matching discs, this material challenges children's sense of touch on both hands and feet. At the same time, it develops the ability to describe sense impressions verbally. Games can be adjusted to fit any child's
Great for those who might have a slight fear of dark places. Children can feel less confined than in a solid wall structure where they can’t see what is happening outside.
These floor tiles will create a fun and exciting environment as children see cause and effect of the internal liquids moving. They are excellent for creating sensory play spaces, quiet reading areas and for encouraging exploratory play. Children are encouraged to
A versatile toy shaped like a turtle shell, with numerous uses. Sit and rock/spin in it, fill it with sand, use it in water play, upturn and stand on it. Good for balancing skills.
A multi-purpose board to assist with exercise, balance and creative play.
A multi-purpose board to help with exercise, balance and creative play
An inflatable ‘ball’ in the shape of a star. Good for hand/eye coordination when throwing and catching.
Can be used for general fun and team building exercises allowing children to work together in a group play situation.
Can be played with friends or by oneself against a wall – indoors or outside. It helps children with hand-eye coordination, timing and encourages active play.
This is an indoor training tool to develop gross motor skills, balance and posture; it can also help release children’s anxiety and anger in a safe environment. Hang the target mat onto a sturdy and stable place such as the wall
| 0.6534
|
FineWeb
|
Addressing Issues of Diversity in Curriculum Materials
and Teacher Education
David McLaughlin (MSU), James Gallagher (MSU), Mary Heitzman (UM), Shawn Stevens (UM), and Su Swarat (NU)
Aikenhead, G. (2001). Integrating Western and Aboriginal sciences: Cross-cultural science teaching.
Research in Science Education, 31, 337-355.
The article addresses issues of social power and privilege experienced by Aboriginal students in science classrooms. A rationale for a cross-cultural science education dedicated to all students making personal meaning out of their science classrooms is presented. The author then describes a research and development project for years 6-11 that illustrates cross-cultural science teaching in which Western and Aboriginal sciences are integrated.
Ball, D. L., & Cohen, D. K. (1996). Reform by the book: What is – or might be – the role of curriculum materials in teacher learning and instructional reform? Educational Researcher, 25(9), 6-8, 14.
The authors describe the uneven role of curriculum materials in practice and adopt the perspective that curriculum materials could contribute to professional practice if they were created with closer attention to processes of curriculum enactment. “Educative curriculum materials” place teachers in the center of curriculum construction and make teachers’ learning central to efforts to improve education. Curriculum use and construction are framed as activities that draw on teachers’ understanding and students’ thinking.
Barab, S. A., & Luehmann, A. L. (2003). Building sustainable science curriculum: Acknowledging and accommodating local adaptation. Science Education, 87(4), 454-567.
Developing and supporting the implementation of project-based, technology-rich science curriculum that is consistent with international calls for a new approach to science education while at the same time meeting the everyday needs of classroom teachers is a core...
| 0.8633
|
FineWeb
|
When sizing a motor for any application there are a lot of factors to consider. Requirements such as speed, torque, frame-size, ramp-up and load all need to be carefully considered. But the first consideration when choosing a motor is understandably how much work can be performed by said motor. The amount of “work” an electric motor can perform is measured in horsepower. When assisting a customer with sizing a motor we often get asked how to determine horsepower because some motor data plates do not clearly state this value. Luckily by using a simple bit of math you can quickly determine horsepower using minimal information. Specifically, the amperage and voltage rating of a motor.
Step One: Determine your Motor’s Wattage
The first step toward determining horsepower is determining another value by which the rate of work is measured, called the watt. Named after the famous Scottish inventor James Watt, the watt is a unit of measure that is used to quantify energy transfer in a system. To determine wattage in a motor you must multiply the amperage rating by the voltage rating.
V X A = W
Example: 460V X 30A = 13,800 Watts
Step Two: Factor in Efficiency Rating
At its core an electric motor’s job is converting electrical energy into mechanical energy a machine can use to perform work. Unfortunately, no motor is 100% efficient and there are inherent losses to work potential that must be factored in. When listed on a motor data plate this value is most often represented as a percentage. When you see a motor efficiency rating you must convert from a percentage to a decimal for the purposes of this equation. For instance 85% efficiency would be .85 efficiency. Add it to your wattage calculation like so:
V X A X E = W
Example: 460V X 30A X .85 = 11,730
Step Three: Converting Wattage into Horsepower
Lastly, we need to convert wattage into horsepower. Roughly 746 watts equal one horsepower. Taking the example above, we can take our calculated wattage of 11,730 and divide it by 746. What we end up with is approximately 15.7, or right around 15 horsepower. That would mean if we had an example motor rated at 460 volts and 30 amps with an efficiency rating of 85 percent, this motor would be roughly a 15 horsepower motor.
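As a quick illustration of the same arithmetic, here is a minimal sketch in Kotlin. The function and constant names are ours, not something from the article, and it follows the simplified V X A X E approach described above.

```kotlin
// Approximate output horsepower from a motor's nameplate voltage, amperage,
// and efficiency, using the simplified V x A x E method from the article.
const val WATTS_PER_HORSEPOWER = 746.0

fun horsepower(volts: Double, amps: Double, efficiency: Double): Double {
    val watts = volts * amps * efficiency   // Steps one and two: wattage with efficiency applied
    return watts / WATTS_PER_HORSEPOWER     // Step three: watts to horsepower
}

fun main() {
    val hp = horsepower(volts = 460.0, amps = 30.0, efficiency = 0.85)
    println("Approximate horsepower: %.1f".format(hp))  // prints roughly 15.7
}
```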
If you need help with your motor, whether with sizing or if you're in need of having it repaired, the professionals at Global Electronic Services are here to help! Be sure to visit us online at www.gesrepair.com or call us at 1-877-249-1701 to learn more about our services. We’re proud to offer Surplus, Complete Repair and Maintenance on all types of Industrial Electronics, Servo Motors, AC and DC Motors, Hydraulics and Pneumatics. Please subscribe to our YouTube page and Like Us on Facebook! Thank you!
| 0.6721
|
FineWeb
|
We have spent our whole lives hearing how good vitamin C is for preventing colds. Thanks to the research of Dr. Linus Pauling in 1970, the popularity of vitamin C became unstoppable. The dose needed to prevent the common cold was 1000 milligrams a day.
Numerous studies have been published that debunked this myth, but now Cochrane has published a conclusive review:
Douglas RM, Hemilä H, Chalker E, Treacy B. Vitamin C for preventing and treating the common cold. Cochrane Database of Systematic Reviews 2007, Issue 3.
Here is the summary in English:
This Cochrane review found that taking vitamin C regularly has no effect on common cold incidence in the ordinary population.
It reduced the duration and severity of common cold symptoms slightly, although the size of the effect was so small its clinical usefulness is uncertain. The authors investigated whether oral doses of 0.2 g or more daily of vitamin C reduces the incidence, duration or severity of the common cold when used either as continuous prophylaxis or after the onset of symptoms. The review included studies using a vitamin C dose of greater than 0.2g per day and those with a placebo comparison.
• For the prophylaxis of colds, the authors carried out a meta-analysis of 30 trial comparisons involving 11,350 study participants. The pooled relative risk (RR) of developing a cold whilst taking prophylactic vitamin C was 0.96 (95% confidence intervals (CI) 0.92 to 1.00). However, a subgroup of six trials involving a total of 642 marathon runners, skiers, and soldiers on sub-arctic exercises reported a pooled RR of 0.50 (95% CI 0.38 to 0.66) i.e. a 50% reduction in the risk of a cold for this group of people.
• For the duration of the common cold during prophylaxis, the authors carried out a meta-analysis using 30 comparisons involving 9676 respiratory episodes. They found a consistent benefit with a reduction in cold duration of 8% (95% CI 3% to 13%) for adults and 13.6% (95% CI 5% to 22%) for children.
• For the duration of cold during therapy with vitamin C started after symptom onset, the authors carried out a meta-analysis of 7 trials involving 3294 respiratory episodes. No significant differences from placebo were seen.
• No significant differences were seen in a meta-analysis of 4 trial comparisons involving 2753 respiratory episodes in cold severity during therapy with vitamin C.
The authors conclude, “The failure of vitamin C supplementation to reduce the incidence of colds in the normal population indicates that routine mega-dose prophylaxis is not rationally justified for community use. But evidence suggests that it could be justified in people exposed to brief periods of severe physical exercise or cold environments.”
| 0.9772
|
FineWeb
|
Micro PET-CT Camera
After the successful operation of the prototype MDAPET Camera, we developed a low-cost, high-sensitivity and high-resolution dedicated animal PET camera (RRPET). In 2006, we successfully completed the construction of the RRPET camera and it was then commercialized as the world’s first animal PET-CT (XPET) scanner. This camera is based on the PQS concept that was first used in the construction of the MDAPET Camera:
- Photomultiplier-Quadrant-Sharing detector design
and the SSB technique that was first introduced in construction of HOTPET Human Camera for building the detector blocks more efficiently:
- Slab-sandwich-slice (SSS) production technique
The RRPET camera consists of 180 BGO (Bismuth Germanate) blocks arranged in 48 rings.
See RRPET specifications, RRPET images and RRPET performance.
| 0.7474
|
FineWeb
|
The trade-off between pleiotropy and redundancy in telecommunications networks is analyzed in this paper. They are optimized to reduce installation costs and propagation delays. Pleiotropy of a server in a telecommunications network is defined as the number of clients and servers that it can service, whilst redundancy is described as the number of servers servicing a client. Telecommunications networks containing many servers with large pleiotropy are cost-effective but vulnerable to network failures and attacks. Conversely, those networks containing many servers with high redundancy are reliable but costly. Several key issues regarding the choice of cost functions and techniques in evolutionary computation (such as the modeling of Darwinian evolution, and mutualism and commensalism) will be discussed, and a future research agenda is outlined. Experimental results indicate that the pleiotropy of servers in the optimum network does improve, whilst the redundancy of clients does not vary significantly, as expected, with evolving networks. This is due to the controlled evolution of networks that is modeled by the steady-state genetic algorithm; changes in telecommunications networks that occur drastically over a very short period of time are rare.
| 0.9991
|
FineWeb
|
The vegetarian recipe for Easy Eggless Cream Cheese Cupcakes:
servings – 24 eggless cupcakes
- 1 block cream cheese (suitable for vegetarians)
- 1 tbsp butter
- 2 cups fresh milk
- 1/2 cup sugar
- 1 tsp vanilla essence
- 1 tsp baking soda (sieve)
- 4 cups self-rising flour (sieve)
- Mix the cream cheese with butter, fresh milk, sugar, vanilla essence and baking soda.
- Add self-rising flour and stir until a smooth batter is formed.
- Place the paper cupcake cases in the metal cupcake molds and pour the batter into the cases.
- Bake in a preheated oven at 170°C for about 20 minutes.
- Remove and let cool.
Frosting:
- 2 blocks cream cheese (suitable for vegetarians)
- 1/2 block butter
- 3/4 cup icing sugar
- 4 drops red liquid food coloring (optional)
- Mix the cream cheese with the butter, icing sugar and red liquid food coloring.
- Stir until a smooth frosting is formed.
- Pipe the frosting onto the cooled cream cheese cupcakes.
| 0.9251
|
FineWeb
|
Quiles-cruz Surname History
The family history of the Quiles-cruz last name is maintained by the AncientFaces community. Join the community by adding to this genealogy of the Quiles-cruz:
- Quiles-cruz family history
- Quiles-cruz country of origin, nationality, & ethnicity
- Quiles-cruz last name meaning & etymology
- Quiles-cruz spelling & pronunciation
- genealogy and family tree
Quiles-cruz Country of Origin, Nationality, & Ethnicity
No one has submitted information on Quiles-cruz country of origin, nationality, or ethnicity. The following is speculative information about Quiles-cruz.
The nationality of Quiles-cruz may be very difficult to determine in cases which country boundaries change over time, leaving the original nationality a mystery. The original ethnicity of Quiles-cruz may be in dispute based on whether the name came in to being organically and independently in different locales; for example, in the case of names that come from a professional trade, which can come into being in multiple countries independently (such as the family name "Brewster" which refers to a female brewer).
Quiles-cruz Meaning & Etymology
No one has submitted information on Quiles-cruz meaning and etymology. The following is speculative information about Quiles-cruz.
The meaning of Quiles-cruz come may come from a profession, such as the name "Archer" which was given to people who were bowmen. Some of these profession-based last names may be a profession in another language. This is why it is important to research the ethnicity of a name, and the languages spoken by its early ancestors. Many modern names like Quiles-cruz come from religious texts like the Bhagavadgītā, the Quran, the Bible, and so on. Often these surnames are shortened versions of a religious expression such as "Favored of God".
Quiles-cruz Pronunciation & Spelling Variations
No one has added information on Quiles-cruz spellings or pronunciations. The following is speculative information about Quiles-cruz.
In early history when few people could write, names such as Quiles-cruz were written down based on their pronunciation when people's names were written in court, church, and government records. This could have given rise misspellings of Quiles-cruz. Understanding spelling variations and alternate spellings of the Quiles-cruz name are important to understanding the history of the name. Last names like Quiles-cruz vary in how they're said and written as they travel across tribes, family branches, and countries across time.
Last names similar to Quiles-cruz: Quilesdmontalvo Quilesfalicea Quilesfcruz Quilesfgandulla Quilesfgonzalez Quilesfmartinez Quilesfmontalvo Quilesfnieves Quilesfperez Quilesframos Quilesfrankie Quilesfreyes Quilesfrivera Quilesfrobles Quilesfruiz Quilesfsoto Quilesfvazquez Quilesgonz Quilesgonzal Quilesgonzalez
Quiles-cruz Family Tree
Here are a few of the Quiles-cruz genealogies shared by AncientFaces users.
| 0.5907
|
FineWeb
|
Re: Dynamic Userform Design
hmmmmmm, I REALLY don't like some of what you're doing, and it's hard to tell if that's just "not the way I'd do it" or actually wrong.
Part of your problem could be the unload userform2 command in the cmbClass module. I THINK the instance of this object is part of the userform2 object, so when you unload userform2 you are attempting to unload something that is currently executing.
When I try it, if I move the msgbox to ABOVE the unload command, I SEE the msg before excel dies; I don't see the message after. So the unload is dying, and I SUSPECT it's dying because you are unloading something that is currently running.
| 0.6671
|
FineWeb
|
Historical trauma, or intergenerational trauma, refers to the cumulative emotional and psychological wounding of a person or generation caused by traumatic experiences or events. Historical trauma can be experienced by any group of people that experience a trauma. Examples include genocide, enslavement, or ethnic cleansing. It can affect many generations of a family or an entire community. Historical trauma can lead to substance abuse, depression, anxiety, anger, violence, suicide, and alcoholism within the afflicted communities. If you are feeling the effects of historical or intergenerational trauma, reach out to one of TherapyDen’s experts today.
| 0.9761
|
FineWeb
|
And he dreamed, and behold! a ladder set up on the ground and its top reached to heaven; and behold, angels of God were ascending and descending upon it.
Last night I dreamed of an atom with a ladder wedged in the nucleus of the atom, with electrons jumping up and down the ladder.
For those readers unencumbered by the knowledge of atomic theory, a brief historical introduction may be in order. When the planetary theory of the atom was first proposed by Ernest Rutherford in 1909, it depicted an atom as a solar system wherein a nucleus was positioned at the center of the atom, with electrons orbiting around the nucleus as planets orbit the Sun. However, there was a problem. According to Maxwell’s theory of electromagnetism, accelerating electrons emit electromagnetic waves, thereby losing their energy. In Rutherford’s model, all electrons were doomed to fall onto the nucleus, which, of course, did not happen. In 1913, Niels Bohr solved this problem by postulating that electrons were only allowed to occupy certain orbits with discrete energy levels. An electron can jump to a higher or lower orbit (by absorbing or emitting a photon) but otherwise orbits the nucleus without losing energy.
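For readers who want the quantitative version of Bohr's postulate, the standard textbook statement (not part of the original post) is that hydrogen's allowed orbits have discrete energies, and a jump between them absorbs or emits a photon whose energy exactly matches the gap:

```latex
% Allowed energy levels of the hydrogen atom (Bohr, 1913)
E_n = -\frac{13.6\ \text{eV}}{n^{2}}, \qquad n = 1, 2, 3, \ldots

% A jump between levels absorbs or emits a photon of matching energy
\Delta E = E_{\text{final}} - E_{\text{initial}} = h\nu
```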
I don’t know if Niels Bohr read Torah, but if he did, this week’s portion may have inspired his insight. In Jacob’s dream, he saw a ladder wedged in the earth with angels moving up and down the ladder. One may ask, why would angels need a ladder to move up or down? In Ezekiel’s vision of Ma’ase Markava, angels used their wings to fly to and fro, without the need of a ladder. So why did angels need a ladder in the dream of Jacob (Yaakov)? Perhaps it provides the symbolism for Bohr’s model of the atom. In my dream, the earth was the nucleus, angels were electrons, and rungs of the ladder were energy levels corresponding to orbits that electrons are allowed to occupy. In Jacob’s vision, angels didn’t fly (change their energy level continuously) but stepped up or down the ladder—one rung at a time. It seems to me, this is symbolic of electrons not being allowed to change their energy continuously but only being able to jump up or down one orbit, which is symbolized by the rungs of the ladder.
To take this metaphor a bit further, let us notice that when an electron jumps to a higher orbit, it absorbs a photon. When the electron jumps to a lower orbit, it emits a photon. According to the Zohar, Jacob’s ladder was the ladder of prayer. Angels going up the ladder brought up the prayers to heaven. Angels going down the ladder brought back the blessings. If photons—quanta of light—are symbolic of prayers and blessings, angels carrying the prayers up the ladder are symbolic of electrons going up the orbit as a result of being irradiated by photons (prayers). Likewise, just as angels going down the ladder carry down blessing, electrons, jumping on lower orbits, irradiate photons of light—blessings.
Philo of Alexandria (a.k.a. Philo Judaeus) offered another mystical symbolism of Jacob’s ladder—angels carrying up souls of departed people ascending to heaven or carrying down to earth souls destined to be born. This interpretation also fits well with our atomic metaphor. Indeed, a photon (symbolic of a person) absorbed by an electron dies, as it were, and only its energy (soul) is carried up by the electron to a higher orbit. Conversely, when an electron jumps down to a lower orbit, the extra energy (soul) causes the electron to emit a photon—symbolic of giving birth to a person in whom the soul incarnates.
| 0.9352
|
FineWeb
|
In Kotlin, the concept of nullable types plays a crucial role in enhancing the safety and expressiveness of the language. This article aims to provide a comprehensive understanding of nullable types in Kotlin and how they contribute to writing more robust and reliable code.
What Are Nullable Types?
Nullable types in Kotlin allow variables to hold null values, providing a clear distinction between nullable and non-nullable types. This feature helps prevent null pointer exceptions, a common source of bugs in many programming languages.
Declaring Nullable Types
In Kotlin, to declare a variable as nullable, you append a question mark (?) to its type. For example, var name: String? declares a nullable string variable.
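A minimal sketch of the distinction (the variable names here are illustrative, not from the article):

```kotlin
fun main() {
    var name: String? = "Kotlin"   // nullable: may hold a String or null
    name = null                    // allowed, because the type is String?

    var title: String = "Guide"    // non-nullable: must always hold a String
    // title = null                // would not compile: null is not an acceptable value for String

    println(name)    // prints: null
    println(title)   // prints: Guide
}
```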
Safe Calls and the Elvis Operator
One of the key features of nullable types is the safe call operator (?.). It allows you to safely perform operations on a nullable variable without the risk of a null pointer exception. Additionally, the Elvis operator (?:) provides a concise way to handle null values by specifying a default value if the variable is null.
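For example, a small sketch combining both operators (the names are illustrative only):

```kotlin
fun main() {
    val nickname: String? = null

    // Safe call: evaluates to null instead of throwing when nickname is null
    val length: Int? = nickname?.length

    // Elvis operator: supplies a default when the left-hand side is null
    val displayName: String = nickname ?: "Anonymous"

    println(length)       // prints: null
    println(displayName)  // prints: Anonymous
}
```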
Type Checks and Smart Casts
Kotlin introduces smart casts, a mechanism that automatically casts a nullable type to a non-nullable type within a certain code block if a null check has been performed. This eliminates the need for explicit casting and enhances code readability.
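As a brief illustration (the function below is our own example, not from the article), a plain null check is enough for the compiler to treat the value as non-nullable inside the guarded block:

```kotlin
fun describe(input: String?): String {
    if (input == null) {
        return "nothing to describe"
    }
    // After the null check, input is smart-cast from String? to String,
    // so its members can be used without ?. or an explicit cast.
    return "length is ${input.length}"
}

fun main() {
    println(describe(null))      // prints: nothing to describe
    println(describe("Kotlin"))  // prints: length is 6
}
```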
The !! Operator and its Risks
While nullable types offer safety, the double exclamation mark (!!) operator allows you to forcefully assert that a nullable variable is non-null. However, this should be used cautiously, as it may lead to null pointer exceptions if the assertion is incorrect.
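A short sketch of the risk (again, the names and environment variable are only illustrative):

```kotlin
fun main() {
    val configuredPath: String? = System.getenv("APP_CONFIG_PATH")  // null if the variable is not set

    // The not-null assertion converts String? to String, but it throws a
    // NullPointerException on this line whenever configuredPath is actually null.
    val path: String = configuredPath!!
    println("Config lives at $path")
}
```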
Working with Nullable Types in Collections
Kotlin’s standard library provides powerful tools for working with collections of nullable types. Functions like mapNotNull make it convenient to handle nullable elements within collections.
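A small sketch of how mapNotNull drops nulls while transforming (the data is made up for illustration):

```kotlin
fun main() {
    val rawInputs: List<String?> = listOf("10", null, "25", "abc", null)

    // mapNotNull applies the transform and silently discards null results,
    // so both the null elements and the unparseable "abc" are dropped.
    val numbers: List<Int> = rawInputs.mapNotNull { it?.toIntOrNull() }

    println(numbers)  // prints: [10, 25]
}
```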
Nullable Types in Function Parameters
When defining functions in Kotlin, you can explicitly specify whether parameters accept nullable types. This helps in creating functions that are more flexible and adaptable to different use cases.
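For instance, a sketch of a function with one required and one optional (nullable) parameter; the names are illustrative:

```kotlin
// The nullable parameter documents that callers may legitimately have no greeting;
// the non-nullable parameter documents that a name is always required.
fun greet(name: String, greeting: String?): String {
    val prefix = greeting ?: "Hello"   // fall back when no greeting is supplied
    return "$prefix, $name!"
}

fun main() {
    println(greet("Ada", null))        // prints: Hello, Ada!
    println(greet("Ada", "Welcome"))   // prints: Welcome, Ada!
}
```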
Migrating Existing Code to Use Nullable Types
For developers transitioning to Kotlin or updating existing code, understanding nullable types is crucial. This section explores best practices and strategies for migrating code to leverage the benefits of nullable types.
| 0.9637
|
FineWeb
|
In today’s fast-paced world, productivity is a key aspect that drives success in various domains. Technology continues to evolve, introducing innovative solutions to enhance efficiency and streamline workflows.
One such groundbreaking advancement is Microsoft AI Copilot, an intelligent tool that revolutionizes productivity in the digital era. In this article, we will explore the features, installation process, and benefits of Microsoft AI Copilot while shedding light on how it empowers users to accomplish tasks more efficiently than ever before.
The Evolution of Productivity Tools
Over the years, productivity tools have undergone significant transformations. From the early days of basic word processors to modern-day collaboration platforms, the aim has always been to enhance efficiency and simplify work processes. Microsoft AI Copilot takes productivity to a whole new level by leveraging the power of AI to provide intelligent assistance and automate various tasks, reducing manual effort and boosting productivity.
Key Features of Microsoft AI Copilot
1. Real-time Assistance
Microsoft AI Copilot offers real-time suggestions and recommendations as you work, helping you complete tasks more efficiently. It analyzes your actions, understands context, and provides relevant suggestions based on best practices and user patterns.
2. Code Completion and Generation
For software developers, AI Copilot proves to be an invaluable companion. It assists in code completion, automatically generating code snippets, and offering intelligent suggestions to speed up the development process. This feature significantly reduces the time spent on writing repetitive code and enhances the overall coding experience.
3. Contextual Documentation
AI Copilot provides contextual documentation, offering relevant code examples, explanations, and references within the development environment. This feature eliminates the need for constant switching between different resources, enabling developers to access necessary information seamlessly.
4. Task Automation
Repetitive and mundane tasks can hinder productivity and creativity. AI Copilot automates such tasks, freeing up valuable time for users to focus on more critical aspects of their work. It can automate tasks like formatting, refactoring, and debugging, enabling users to complete them swiftly and accurately.
5. Natural Language Support
AI Copilot understands natural language queries and instructions, making it easier to interact with the tool. Users can simply describe the task or ask for specific assistance, and AI Copilot will provide relevant suggestions or perform the requested action.
Installation and Access
To install and access Microsoft AI Copilot, follow these simple steps:
1. Visit the official Microsoft website or the Microsoft Store.
2. Download the AI Copilot plugin or extension for your application or development environment.
3. Follow the installation instructions provided by Microsoft.
4. Launch the application or software after installation to start using AI Copilot.
With these easy steps, you can quickly install and start using Microsoft AI Copilot to boost your productivity.
Benefits of Microsoft AI Copilot
Enhancing Collaboration and Efficiency
Microsoft AI Copilot promotes collaboration by providing suggestions and insights that align with best practices and coding standards. It assists in creating consistent and high-quality code, even when working in teams. By streamlining collaboration and reducing errors, AI Copilot enables developers to work together seamlessly, resulting in increased efficiency and better code quality.
Simplifying Complex Tasks
Complex tasks often require extensive research and expertise. With AI Copilot, users can simplify such tasks by leveraging its intelligent assistance. Whether understanding complex code structures, navigating detailed documentation, or implementing advanced algorithms, AI Copilot offers the necessary support and guidance to tackle complex challenges easily.
Customizing AI Copilot for Personalized Workflows
Microsoft AI Copilot understands that every user has unique preferences and work patterns. It provides customization options, allowing users to tailor the tool according to their specific needs. Users can adjust the level of suggestions, enable/disable certain features, and personalize the tool’s behaviour, ensuring it aligns perfectly with their individual workflows.
Addressing Privacy and Security Concerns
With the increasing reliance on AI technologies, privacy and security are of paramount importance. Microsoft AI Copilot prioritizes user privacy and data protection. It operates within strict security measures, ensuring that sensitive information remains confidential and secure. Microsoft is committed to maintaining the highest privacy and data protection standards across all its products and services.
The Future of Microsoft AI Copilot
Microsoft AI Copilot represents the future of productivity tools. As technology advances, AI Copilot will evolve and adapt to meet the ever-changing needs of users. We can expect further enhancements, additional integrations with popular software, and improved support for different industries and domains. Microsoft’s dedication to innovation ensures that AI Copilot will remain at the forefront of revolutionizing productivity tools for the future.
Microsoft AI Copilot is a game-changer in the realm of productivity tools. Its intelligent assistance, code generation capabilities, and task automation features empower users to accomplish more with greater efficiency. By streamlining workflows, enhancing collaboration, and simplifying complex tasks, AI Copilot proves to be an invaluable companion for professionals across various domains. Embrace the future of productivity with Microsoft AI Copilot and unlock your full potential.
Frequently Asked Questions (FAQs)
Q: How to install and access Microsoft AI Copilot?
A: To install Microsoft AI Copilot, visit the official Microsoft website or the Microsoft Store. Download the AI Copilot plugin or extension and follow the installation instructions provided by Microsoft. Launch the application or software after installation to access AI Copilot features.
Q: Does Microsoft AI Copilot support multiple programming languages?
A: Yes, Microsoft AI Copilot supports multiple programming languages. It provides code completion and generation features for popular programming languages and development environments.
Q: Can AI Copilot be customized to suit individual workflows?
A: Yes, AI Copilot offers customization options. Users can adjust the level of suggestions, enable/disable specific features, and personalize the tool’s behaviour to align with their unique workflows.
Q: Is Microsoft AI Copilot compatible with popular productivity software?
A: Yes, Microsoft AI Copilot is designed to integrate seamlessly with popular productivity software and development environments, enhancing their functionality and productivity.
Q: How does Microsoft prioritize privacy and data security with AI Copilot?
A: Microsoft prioritizes user privacy and data security. AI Copilot operates within strict security measures, ensuring that sensitive information remains confidential and secure.
| 0.7597
|
FineWeb
|
Normatec Boot Attachment
The Normatec Boot Attachments feature five overlapping zones for gapless compression. Composed of premium materials, this leg sleeve is compatible with the Normatec 3, Pulse and Pulse Pro 2.0 air compression devices (sold separately). Three size options: standard for 5’4” to 6’3” individuals, tall for 6’4” and over, and short for 5’3” and under.
When connected to the Normatec 3 device or one of the Pulse devices, The Normatec Boot Attachment inflates and squeezes sore muscles in the legs and feet to increase circulation, enhance blood flow and reduce soreness. The full-length leg sleeves feature five overlapping zones for effective and gapless air compression technology. The custom foot design applies compression to the bottoms of the feet without uncomfortably squeezing the toes. The three lengths, Short, Standard and Tall are sold both individually and in pairs.
- Overlapping Zones: Five overlapping zone sections which allow for a custom and gapless compression.
- Compatibility: Compatible with the Normatec 3, Pulse 2.0 and Pulse Pro 2.0 series devices.
- High-Quality: Composed of premium grade materials, these attachments are built to last and can be easily wiped down clean.
Why You Need It:
The Normatec Boot Attachment provides muscle relief in the quads, hamstrings and foot muscles. Once attached to a Normatec device, the air compression helps to reduce pain from sports, fitness or every day activities. The leg sleeve can be used to enhance blood flow, reduce soreness and improve athlete performance!
How It Helps:
Attach the NormaTec Boot Attachment to your Pulse 2.0 or Pulse Pro 2.0 device to control the time, pressure and zone settings of your leg and foot muscle recovery. These boot attachments include premium grade locking zippers and a specialized foot design to apply compression to the bottom of your feet without squeezing your toes uncomfortably.
What You Can Do With It:
The Boot Attachment fills with air, zone by zone, to create a lower body muscle massage. With five overlapping zones, and premium grade locking zippers, this high-quality sleeve allows for a gapless compression. Power control device not included.
Standard, Short, Tall, Standard Pair, Short Pair, Tall Pair
| 0.8586
|
FineWeb
|
Sujatha Muralidharan is an immunologist with an interest in studying mechanisms of immune suppression (tolerance) induced in hosts in response to stresses such as bacterial infection or alcohol consumption. The goal of her research is to identify key molecules that play a regulatory role in immune tolerance so these could be targeted for development of novel and effective immunotherapies. She is currently a post-doctoral researcher in the lab of Dr. Linden Hu in the Microbiology department of Tufts University. Her research focuses on inflammatory responses to Lyme disease bacteria Borrelia burgdorferi in innate immune cells. She received her PhD in Immunology from Baylor College of Medicine where she studied the role of Wnt signaling in peripheral T cell activation and maturation. Outside of lab, she enjoys reading and watching science-fiction movies.
| 0.5867
|
FineWeb
|
Some fun games for adults include Who Am I and Mail Call. Another fun game is an orange race that uses oranges and pantyhose. Adults can play the orange race as a tournament, with players racing in elimination rounds, ending with a championship round.
To play Who Am I, assign each person a name and attach it to his forehead or back. Each person can then ask 20 "yes or no" questions to figure out who he is.
To play Mail Call, the group creates a closed circle with one person in the center. People can stand in the circle or use chairs. The person in the center then makes a statement, such as "mail call for everyone who is wearing blue." At that point, everyone in the circle who is wearing blue must switch places in the circle with another person who is wearing blue. Players are not allowed to move to the spot directly next to them. The object of the game is for the person in the middle to find a spot in the circle before someone else does. The next person in the middle continues the game with more "mail calls," choosing identifying factors to make players move.
To play the orange relay race game, each racer must have two oranges and a pair of pantyhose. To set up, the racers place one orange on the floor and the other in a leg of the pantyhose, and tie the other leg around their waists. The leg with the orange should hang to the ground, and swing between the racer's legs. The racers then swing the hanging orange to push the second orange across the floor to the finish line. The first person across the finish line wins.
| 0.6081
|
FineWeb
|
This tool converts genome coordinates and annotation files between assemblies. The input data can be entered into the text box or uploaded as a file. For files over 500Mb, use the command-line tool described in our LiftOver documentation. If a pair of assemblies cannot be selected from the pull-down menus, a sequential lift may still be possible (e.g., mm9 to mm10 to mm39). If your desired conversion is still not available, please contact us.
| 0.7202
|
FineWeb
|
ANALYSIS OF THE RELATIONSHIP BETWEEN EMPATHY AND FAMILY FUNCTIONING IN DENTISTRY STUDENTS OF THE LATIN AMERICAN UNIVERSITY OF SCIENCE AND TECHNOLOGY (ULACIT), SAN JOSE, COSTA RICA
1 Paniamor (COSTA RICA)
2 ULACIT (COSTA RICA)
3 Universidad Santo Tomás, Concepción, Chile. (CHILE)
4 Hospital Félix Bulnes, Departamento de Psiquiatría Infantil y del Adolescente (CHILE)
5 Universidad San Sebastián, Facultad de Odontología (CHILE)
About this paper:
Conference name: 9th International Conference on Education and New Learning Technologies
Dates: 3-5 July, 2017
Location: Barcelona, Spain
Abstract: Empathy is a fundamental attribute for health science professionals, which has both affective and cognitive components, as well as a complex family influence during its development. This work seeks to establish the relationship between empathy and family functioning, as well as the possible relation with gender, in active students of Dentistry of the Latin American University of Science and Technology (ULACIT). A previous study in this same institution showed that gender is an influential factor in the levels of empathy, favoring women (Sánchez et al, 2013), but by increasing practical and community-related experiences, those differences between men and women decreased (Utsman et al, 2017). This paper analyzes the effect of family functioning on empathy levels using two instruments: the Family Functional Questionnaire (FACES) and the Jefferson Empathy Scale (JSE-S). A total of 159 dental students of ULACIT (Costa Rica), active in 2016, equivalent to 53.7% of the population, participated. The statistical analysis used ANOVA, the Kolmogorov-Smirnov test and Levene's test of equality of variances. The bifactorial analysis of variance, model III, shows that for general empathy there are no significant differences (p > 0.05), but if the dimensions are analyzed individually, “Compassion with Care” was superior in females (effect size of 0.032, test power 0.614). The other dimensions did not show gender differences. Regarding family function, the scale considered three different styles: balanced, intermediate, and extreme; interestingly, most of the participants belonged to the third style. Family functionality has been described as responsible for generating sensitization and understanding behaviors towards the patient (Madera et al, 2015). Extreme families may be chaotically attached, chaotically detached, rigidly attached or rigidly detached, and these conditions could generate a strong need for fidelity and loyalty (Olson et al, 1983). The students with extreme styles of family functioning showed higher levels of empathy, which can be explained by the development of a personality structure and dynamics that incorporates resilience as well as comprehension and acceptance of the differences between people. A person can overcome adversity, and learn to communicate effectively and recognize the values and conditions of others. Another explanation for the result could be the possible cultural bias in the measurement of family functioning that the FACES scale offers. Although there is literature reporting positive results of the adaptation of the scale to Spanish and its corresponding application in Latin American contexts (Costa et al, 2013), it is possible that the scale does not consider the particularities of Costa Rican families.
The results of this study indicated that there is a relationship between the type of family functioning (according to the scale as currently applied) and empathy, where extreme families show higher values of empathy. The development of this communication skill is key for a health science professional, so recognizing the influence of the student’s background is important when designing a learning experience to develop empathy.
Keywords: Empathy, Family Function.
| 0.9313
|
FineWeb
|
This job posting is no longer active.
Some call it a career, for us it’s a calling.
National Jewish Health is currently seeking Clinical Laboratory Scientists to join our motivated and fast-paced COVID testing team. Positions are temporary and will have flexible schedules on days, evenings, and weekends with 10 hour shifts. Base rate of pay is $26.00/hour plus potential for additional shift differential based on schedule.
What you’ll do:
- Perform high complexity tests that are authorized by the laboratory supervisor, manager or director and review testing performance as applicable.
- Follow the laboratory and NJH established policies and procedure manuals.
- Participate in and maintain records that demonstrate that proficiency testing samples are tested in the same manner as patient specimens.
- Adhere to and understand the quality control policies of the laboratory, documenting all quality control activities, instrument and procedural calibrations and maintenance performed.
- Document all corrective actions taken when test systems deviate from the laboratory’s established performance specifications.
- Follow GxP (e.g., GLP, GCLP, GCP, etc.) standards as defined by different national and international organizations (e.g., ISO, FDA, OECD, etc.) when appropriate for clinical or preclinical trials.
- Perform competencies (including age-specific competencies and/or non-human species) as identified through the departmental competency program.
- Monitor and report on stocks of supplies and equipment, as directed. Make reagents as necessary.
- Perform error correction, photocopying and data entry and compilation as required.
- Follow set guidelines to troubleshoot/correct assay problems or instrument malfunctions. Perform maintenance and work with supervisor/manager in troubleshooting QC or instrument problems.
- Follow specific biosafety standards for the laboratory and protocols for handling potentially infectious material.
What you’ll need:
- Bachelor’s degree in Biology, Chemistry or a related scientific field.
- 1 year of related laboratory work experience preferred.
As the leading respiratory hospital in the nation, National Jewish Health is pioneering a new era of preventive and personalized medicine. By combining our efforts in comprehensive care, academic education and ground-breaking research, we're able to develop treatments that help our patients live more productive lives. If you believe in Breathing Science is Life, we invite you to join our team.
| 0.6887
|
FineWeb
|
What is Character Education?
When we think about our students and wonder how we can better prepare them to be good, valuable citizens in the future, the idea of character education comes to my mind. Of course, we want our students to be proficient in math and reading, but we also want them to be proficient in being a productive and beneficial member of society. What better way to do that than introducing character education in the classroom! Character education is the act of instilling the values of kindness, generosity, and integrity in students. It consists of teaching the key components of moral excellence through one’s actions.
What is moral excellence? Moral excellence is centered on one thing, and that is doing the right thing. It includes having integrity or doing what is right when no one is looking. It is showing care for others or having empathy when our friends are going through a hard time. Moral excellence is demonstrating kindness to those around you. It is being responsible and taking ownership of one’s actions.
As we enter the holiday season, we can find several ways to easily integrate character education into the classroom. The holidays are an excellent time to teach students the value of kindness, charity, empathy, and putting the needs of others above their own. Below are some ways to help develop those invaluable characteristics in your students during the most wonderful time of the year!
Character Education Activities for the Holidays
Organize a Food, Toy, or Clothing Drive
The holidays present a lot of fun, but they also present a lot of needs. There are always needs within every community, but it is especially important to reach out to those less fortunate during the holidays. Many are without family or lack the means necessary to attain items on their own due to financial circumstances or other personal situations. Students can organize food, toy, and/or clothing drives to help families continue to celebrate the holidays despite those unfortunate circumstances.
Any drive of this nature requires community involvement and a large amount of responsibility from students in order to be successful. Students must learn to communicate with those in their communities to get the word out and better help those in need. Students learn to be responsible for collected materials and understand their importance.
Fundraisers for an Important Cause
During the holidays, students can raise money for important causes either locally or nationally. For instance, students may be encouraged to raise funds for cancer research, a local homeless shelter, or animal shelter. As a class, students can learn about the intended recipient of the funds before beginning the fundraising process. In doing so, students gain a better understanding of why it is important to raise money for their chosen organization.
This understanding also helps to create a bigger desire in students to make a difference, too! Since students will be collecting money, students will learn to do the right thing even when no one is looking. They must collect money and show integrity to ensure that the money goes to its intended recipient only.
One way to extend this idea within your classroom is to research two or three different organizations. Then, students can vote on which organization they would like to raise money for and why.
Embracing Charity and Giving
In continuing with the idea of drives and fundraisers, another excellent activity for character education is to embrace charity and giving. The central ideas of the holidays that echo all throughout the season are thankfulness and giving. Charity is the act of giving to others in need. Charity helps to develop empathy in students. In school, students could place themselves in another person’s shoes. For example, students could volunteer in the cafeteria or help clean the school building in order to better grasp all that cafeteria workers and custodians do on a daily basis.
Outside of school, students could imagine what it must be like to be homeless or without basic needs and decide to do something about it. This may inspire them to volunteer at a local soup kitchen or shelter. Regardless of the location, acts of charity teach students to be sensitive to those around them, and they also remind students to be thankful for all they have.
Random Acts of Kindness
This is probably my favorite way to instill the values of character in students! It is fun and rewarding. It’s simple. Ask students to participate in random acts of kindness. These “acts” can be performed anonymously or not, but they are sure to put a smile on someone’s face.
There are several ways to give acts of kindness while in school. Students could write thank you notes to public service workers, be directed to help a friend when they are having a bad day, clean up a mess that’s not their own, share words of encouragement with one another, or even make gifts for school staff members. Students can even spread kindness outside of school by delivering treats to local businesses, buying someone else’s meal, picking up trash, or surprising a neighbor with a meal.
Clearly, providing others with an act of kindness can be as simple or complex as you desire. The main idea is to teach students to be kind to others and realize how it makes them feel in the process!
Creatively Encourage Others
One of the best aspects of the holiday season is how joyful it is! Students can spread cheer to others in a large number of ways, and in the process, they reinforce the need to care about others and their feelings. Students could go caroling, make holiday cards to share within the school or local nursing home, decorate holiday scenes to share with those in the hospital, etc. All of these activities are both fun and exciting for students, but when they realize the activity serves an additional purpose of providing joy to someone else, it makes it even more rewarding and enjoyable.
| 0.9529
|
FineWeb
|
To achieve this prestigious award a Venturer Scout must be able to set a goal; plan progress towards that goal; organise themselves and others; and maintain the determination to overcome difficulties and complete the task.
They must also have achieved the Venturing Skills Award and complete the requirements in four award areas:
- Adventurous Activities – demonstrates that the Venturer Scout is challenged in initiative, expeditions and outdoor adventures.
- Community Involvement – activities centred on citizenship, community service and caring for the environment.
- Leadership Development – involvement in Unit management and leadership courses and studying different vocations.
- Personal Growth – self development through expressions, ideals, mental pursuits and personal lifestyle.
Each year only a few Venturer Scouts achieve this prestigious award, which is presented by the Governor and Chief Scout of New South Wales, as a representative of the Queen, at Government House.
| 0.6429
|
FineWeb
|
Molteni, D., Vitanza, E., & Battaglia, O. R. (2016). Smoothed Particles Hydrodynamics numerical simulations of droplets walking on viscous vibrating fluid. arXiv preprint arXiv:1601.05017.
“We study the phenomenon of the “walking droplet”, by means of numerical fluid dynamics simulations using a standard version of the Smoothed Particle Hydrodynamics method. The phenomenon occurs when a millimetric drop is released on the surface of an oil of the same composition contained in a container subjected to vertical oscillations of frequency and amplitude close to the Faraday instability threshold. At appropriate values of the parameters of the system under study, the liquid drop jumps permanently on the surface of the vibrating fluid forming a localized wave-particle system, reminding the behavior of a wave particle quantum system as suggested by de Broglie. In the simulations, the drop and the wave travel at nearly constant speed, as observed in experiments. In our study we made relevant simplifying assumptions, however we observe that the wave-drop coupling is easily obtained. This fact suggests that the phenomenon may occur in many contexts and opens the possibility to study the phenomenon in an extremely wide range of physical configurations.”
| 0.8548
|
FineWeb
|
Dynamics of Infected Snails and Mated Schistosoma Worms within the Human Host
G. Besigye-Bafaki and L. S. Luboobi
DOI : 10.3844/jmssp.2005.146.152
Journal of Mathematics and Statistics
Volume 1, Issue 2
Male and female worms are independently distributed within a human host, each following a Poisson probability mass function. Mating takes place immediately when partners are available. It was found that the mated worm function is non-linear near the origin and becomes almost linear as the number of worms increases. Mated pairs increase with increasing worm load due to aggregation of worms. This also increases the infection of snails, which are secondary hosts. On analysis of the model, three equilibrium states were found, two of which were stable and one unstable. A stable endemic equilibrium within a community is very much undesirable. So the main objective of the model was to have the point O(0,0) as the only equilibrium point. This is a situation where there are no worms within the human host and the environment is free of infected snails. A critical point, above which the disease would be chronic and below which the disease would be eradicated, was found and analyzed. The parameters indicated that to achieve a disease-free environment, the death rate of worms within the human host should be much greater than the rate at which cercariae penetrate the human. Also, the death rate of infected snails should be much higher than the contact rate between the miracidia and the snails. It was concluded that de-worming and killing of snails should be emphasized for disease control, and that educating the masses on the modes of disease transmission is quite necessary for prevention of the disease.
© 2005 G. Besigye-Bafaki and L. S. Luboobi. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
| 0.8087
|
FineWeb
|
Does he call you just to hear your voice,
tell you he's glad he made that choice
to keep you as his one and only,
so even when you're alone you don't feel lonely?
Does he hold you close just because he can,
making you glad that he's your man?
Give kisses in random places,
Just to see your random faces?
Does he ever cater to your needs,
breakfast in bed, and it's you he feeds?
Roses just to see you smile,
sweet nothings every once in a while?
Does he rub you down after a long day,
take you out to eat, willing to pay?
First bite off his plate is yours,
not just the entree, but every course?
Does he fill his phone with pics of you,
proudly proclaiming, "Yeah, that's my Boo!"?
Candid shots of candid times,
the first 3 of his "Fave 5"?
Does he treat you the way you'd like?
Because if not, Daddy will do you right.
| 0.9924
|
FineWeb
|
The mining technique of mountain top removal, and subsequent valley filling, a practice employed in the Appalachian Coal Belt Region of eastern Kentucky, is detrimental to headwater stream systems. The watershed values (i.e. water storage, carbon sequestration, nutrient cycling, habitat, etc.) provided by headwater stream systems are essentially lost once the valley is filled. The development of practical stream restoration and creation techniques for post-mined lands is needed to regain lost headwater stream system value. Important to note is that these techniques must be 1) all encompassing of the valuable functions of headwater stream systems and 2) economically feasible for the mining companies to implement for both currently constructed fills and for future fills.
Fortunately, an opportunity to develop head-of-hollow fill stream restoration techniques is present at the University of Kentucky's Robinson Forest. Robinson Forest is an approximately 15,000-acre teaching, research and extension forest administered by the Department of Forestry at the University of Kentucky. Located in the rugged eastern portion of the Cumberland Plateau and largely isolated from human activities, Robinson Forest is unique in its diversity. During the 1990s, a section of Robinson Forest, including the proposed restoration site at Guy Cove, was mined for coal. As part of the mining process, a valley fill was created in Guy Cove, which impacted the headwater stream system in that valley. While there was significant environmental loss, a unique research and demonstration opportunity was created. Currently, the University of Kentucky has received funding from the Kentucky Department of Fish and Wildlife Resources’ In-Lieu-Fee Program to conduct a restoration project at Guy Cove.
The objectives of the Guy Cove Restoration Project are to:
- Recreate headwater stream functions in an economically feasible manner.
- Attenuate runoff events to reduce peak discharges and increase base flows.
- Promote surface expression of water and enhance wetland treatment efficiency to improve water quality.
- Improve habitat through the development of vernal ponds and a hardwood forest.
- Establish an outdoor classroom for demonstrating design principles, construction techniques, and measurement of system performance.
- Educate a myriad of stakeholders including consulting and mining engineers, land reclamation design professionals, the regulatory community, environmental advocacy groups, and students.
The major components of the design included:
- Modifications to the head-of-hollow fill geometry,
- Compaction of the crown to control infiltration,
- Creation of a channel, with a clay underliner, across the crown of the fill,
- Use of loose dumped spoil to promote tree growth,
- Development and/or enhancement of a variety of ephemeral channels utilizing different materials such as rock from the head-of-hollow fill, rock from natural channels, and woody debris,
- Creation of vernal ponds for energy dissipation and habitat enhancement, and
- Implementation of a treatment system along with modifications to an existing wetland to improve water quality.
| 0.9856
|
FineWeb
|
To identify, analyze, and prioritize business continuity requirements is crucial to initiate the business continuity management (BCM) program. Which of the following should be conducted first?
A. Determining the scope of the BCM program
B. Understanding the organization and its context
C. Understanding the needs and expectations of stakeholders
D. Develop project plans
Kindly be reminded that the suggested answer is for your reference only. It doesn’t matter whether you have the right or wrong answer. What really matters is your reasoning process and justifications.
My suggested answer is B. Understanding the organization and its context.
Stakeholders are identified after the context is determined and analyzed. Their needs and expectations are solicited, collected, analyzed, and managed as requirements, and become the basis of the scope. Alternatives are then proposed to meet stakeholders’ requirements. A business case evaluates the alternatives, selects one as the solution, and supports a program or project to be sponsored and initiated.
That said, a program or project is initiated with a charter supported by a business case that evaluates alternatives and determines the solution to meet stakeholders’ needs and expectations identified from the organization and its context, typically through internal and external analysis or environment scanning.
The scope of the BCM program is approved, baselined, and documented in the program plan after the program is initiated.
PMI OPM and Project Management
A BLUEPRINT FOR YOUR SUCCESS IN CISSP
My new book, The Effective CISSP: Security and Risk Management, helps CISSP aspirants build a solid conceptual security model. It is not only a tutorial for information security but also a study guide for the CISSP exam and an informative reference for security professionals.
| 0.7429
|
FineWeb
|
Every year during the fall, winter and early spring, we restrict visitation to our Children’s Hospital and our Neonatal Intensive Care Unit. The reason: to protect our patients from viruses like RSV (respiratory syncytial virus). You may have never heard of RSV, but there is a very good chance that you HAVE had it. For most people, RSV acts just like a common cold, but for the very young or immunocompromised, RSV can cause serious problems and may even require mechanical ventilation.
Hospitals are places to get well, and it is our job to try to prevent additional illness while patients are in our care. Just a little cold or sniffle for you or an otherwise healthy sibling, can turn into a very bad illness for a young, hospitalized child.
This is why during the respiratory viral season we ask that:
- Visitors be 12 years of age or older to enter our Children’s Hospital and NICU
- You always wash your hands with soap and water or use alcohol hand gel upon entering and leaving a child’s room
- You refrain from visiting a child or infant in the hospital if you have fever, a cough or a runny nose
Things you may not know about RSV:
- It often presents like the common cold in otherwise healthy (older) children and adults
- Premature infants and very young children are at greater risk of getting a serious case of RSV
- People infected with RSV are contagious for 3 to 8 days
- There are shots high-risk babies can get to help prevent RSV, but it is not a vaccine
- Once you have RSV, doctors cannot cure the disease; they can only treat the symptoms
- RSV spreads rapidly among young children
- If a case of RSV is serious enough in a young child, it can even continue to cause respiratory issues as the child ages.
Now that you know about RSV, help us protect young patients from the virus and its potentially serious complications.
| 0.9499
|
FineWeb
|
License for MariaDB 10.4.27 Release Notes
This page is licensed under both of the following two licenses:
- The Creative Commons Attribution/ShareAlike 3.0 Unported license (CC-BY-SA).
- The GNU Free Documentation License (GFDL or FDL).
Please seek proper legal advice if you are in any doubt about what you are and are not allowed to do with material released under these licenses.
| 0.7219
|
FineWeb
|
A freely available database for major league professional hockey. Covers the following leagues: NHA, NHL, PCHA, WCHL (known as the WHL in its final year), and WHA.
- Jan 3, 2007
- This is a public group.
- Attachments are permitted.
- Members cannot hide email address.
- Listed in Yahoo Groups directory.
- Membership does not require approval.
- Messages require approval.
- All members can post messages.
| 0.8502
|
FineWeb
|
Solar panel efficiency is expected to continue to improve as research and development in the field progress. Current research focuses on increasing the efficiency of solar cells, developing new materials for use in solar panels, and finding ways to reduce the cost of manufacturing solar panels. Some experts predict that solar panel efficiency could reach as high as 50% in the future, which would be a significant increase over current levels. Additionally, the use of concentrated solar power (CSP) and hybrid solar panels (bifacial, tracking, etc.) is expected to become more common, further increasing the overall efficiency of solar power systems.
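To make the efficiency figures above concrete, here is a minimal back-of-the-envelope sketch in Python. The roof area, irradiance, and sun-hour values are illustrative assumptions chosen for the example, not data from this article.

```python
# Illustrative estimate of rooftop solar output at different cell efficiencies.
# All inputs below are assumptions for the sake of the example.
PEAK_IRRADIANCE_W_PER_M2 = 1000   # standard test-condition sunlight
ROOF_AREA_M2 = 30                 # assumed usable roof area
EQUIV_SUN_HOURS_PER_DAY = 4.5     # assumed average full-sun hours per day

def annual_output_kwh(efficiency: float) -> float:
    """Estimate yearly energy (kWh) for a roof array at a given cell efficiency."""
    peak_power_kw = PEAK_IRRADIANCE_W_PER_M2 * ROOF_AREA_M2 * efficiency / 1000
    return peak_power_kw * EQUIV_SUN_HOURS_PER_DAY * 365

for eff in (0.20, 0.30, 0.50):   # roughly today's panels, tandem cells, speculative future
    print(f"{eff:.0%} efficient panels -> ~{annual_output_kwh(eff):,.0f} kWh/year")
```

Under these assumed numbers, moving from 20% to 50% efficient cells roughly 2.5x the annual output from the same roof, which is the practical meaning of the efficiency gains discussed above.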
Latest Research In Solar Energy
There are many ongoing research efforts in the field of solar energy, with new developments and discoveries being made regularly. Some of the latest research in solar energy includes:
- Perovskite solar cells:
Perovskite solar cells are a newer type of solar cell that has the potential to be more efficient and less expensive than traditional silicon solar cells.
- Dye-sensitized solar cells:
Dye-sensitized solar cells use a dye to absorb sunlight and convert it into electricity. They are less efficient than traditional solar cells but are less expensive to produce.
- Organic solar cells:
Organic solar cells are made from organic materials and they are flexible, lightweight, and can be produced at a lower cost than traditional solar cells.
- Tandem solar cells:
Tandem solar cells use multiple layers of solar cells to increase efficiency. They have the potential to convert more than 30% of the sunlight into electricity, which is significantly higher than traditional silicon solar cells.
- Hybrid solar panels:
Hybrid solar panels are a combination of different types of solar cells, such as silicon and perovskite cells, to increase the overall efficiency of the panel.
- Concentrated Solar Power (CSP):
Concentrated Solar Power (CSP) systems use mirrors to focus sunlight onto a receiver, which converts the heat into electricity. CSP systems are less efficient than traditional solar cells but have the potential to generate electricity during times when the sun is not shining.
These are some of the research directions currently being pursued in the solar energy field. Knowledge in this area is evolving quickly, and newer developments may not be reflected here.
Solar Energy Solutions For Residential Homes
There are several solutions for using solar energy in residential homes, including:
- Solar panels:
The most common and well-known solution for residential solar energy is the installation of solar panels on the roof of a home. These panels convert sunlight into electricity, which can be used to power the home or sent back to the grid for a credit on the homeowner's utility bill.
- Solar water heaters:
These systems use solar energy to heat water for household use, such as for showers and laundry. They can be used in combination with traditional water heating systems for added efficiency.
- Solar battery storage:
As the cost of batteries continues to decrease, more homeowners are installing battery storage systems to store the electricity generated by their solar panels for use during non-sunlight hours.
- Solar Attic Fans:
Solar attic fans use solar energy to ventilate the attic and reduce heat build-up in the home. This can reduce the load on air conditioning systems and lower energy costs.
- Solar pool heating:
Solar pool heating systems use solar energy to heat swimming pools. This can extend the swimming season and reduce the need for electricity or gas to heat the pool.
- Hybrid solar systems:
Hybrid solar systems are a combination of different types of solar energy solutions, such as solar panels and a backup generator. This provides a reliable source of electricity even when sunlight is not available.
Each of these solutions has its own set of benefits and drawbacks, and the best option for a particular home will depend on the homeowner's specific energy needs, budget, and location.
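As a hedged illustration of how a homeowner might weigh the rooftop-panel option, the sketch below estimates system size and panel count from annual consumption. Every input (consumption, panel rating, sun hours, system losses) is an assumed placeholder to be replaced with real utility-bill and datasheet values.

```python
import math

# Rough sizing sketch for a residential rooftop system.
# Assumed inputs -- replace with real utility-bill and datasheet values.
ANNUAL_CONSUMPTION_KWH = 10_000   # assumed household usage per year
PANEL_RATED_W = 400               # assumed rating of one panel
EQUIV_SUN_HOURS_PER_DAY = 4.0     # assumed site average
SYSTEM_LOSSES = 0.85              # assumed inverter, wiring, and soiling losses

daily_need_kwh = ANNUAL_CONSUMPTION_KWH / 365
required_kw = daily_need_kwh / (EQUIV_SUN_HOURS_PER_DAY * SYSTEM_LOSSES)
panels_needed = math.ceil(required_kw * 1000 / PANEL_RATED_W)

print(f"System size: ~{required_kw:.1f} kW, about {panels_needed} panels")
```

With these assumed figures the estimate comes to roughly an 8 kW array of about 21 panels; the same arithmetic applies whichever panel and consumption numbers a household actually has.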
Future Prospects Of Solar-Powered Cities
The future prospects of solar-powered cities are very promising as more and more cities around the world are turning to solar energy as a way to reduce their dependence on fossil fuels and decrease their carbon footprint.
- Increased solar panel installations:
In the future, it is likely that we will see more and more solar panel installations in cities, both in residential homes and commercial buildings. This will help to increase the overall amount of electricity generated by solar energy in cities.
- Development of smart cities:
Smart cities are urban areas that use technology to improve the quality of life for residents and reduce their environmental impact. In a smart solar-powered city, energy demand and supply would be monitored and distributed more efficiently.
- Microgrids:
Microgrids are local energy systems that can function independently from the traditional power grid. They are becoming more common in cities as a way to increase energy security and reduce dependence on fossil fuels. In a solar-powered city, the microgrid would be powered primarily by solar energy.
- Electric vehicles:
Electric vehicles are becoming more popular in cities, and as the number of electric vehicles on the road increases, the demand for solar-generated electricity will also increase.
- Building integrated photovoltaics (BIPV):
Building integrated photovoltaics (BIPV) is a type of solar panel that is integrated into the building, rather than being added on as an afterthought. BIPV has the potential to greatly increase the amount of solar energy generated in cities.
- Concentrated solar power (CSP):
CSP is a technology that uses mirrors to reflect and concentrate sunlight onto a receiver, which converts the heat into electricity. This technology is more appropriate for large-scale power generation and can be a great solution for solar-powered cities.
However, it's worth noting that creating a solar-powered city requires a significant investment in infrastructure and technology, as well as a change in the mindset of the citizens. The implementation of these solutions and the level of success vary from one city to another depending on factors such as government policies, investment, and public awareness.
Solar Energy Market Growth And Trends
The solar energy market has experienced significant growth in recent years and is expected to continue to grow in the future. Some of the key trends and drivers of this growth include:
- Declining costs:
The cost of solar energy has been decreasing in recent years due to advances in technology and economies of scale. As the cost of solar energy continues to decrease, it is becoming more competitive with other forms of energy, making it a more attractive option for both residential and commercial customers.
- Government policies:
Government policies, such as tax incentives and renewable energy mandates, have played a significant role in driving the growth of the solar energy market. These policies have helped to create a more favorable environment for solar energy development and deployment.
- Increasing demand:
As concerns about climate change and energy security continue to grow, the demand for solar energy is also increasing. This is especially true in developing countries, where the need for access to electricity is increasing as the population grows.
- Innovations in technology:
Research and development in solar energy technology have led to new developments, such as perovskite solar cells, which have the potential to be more efficient and less expensive than traditional silicon solar cells.
- Battery storage:
The decrease in battery storage costs has made it more viable to store the electricity generated by solar panels; this allows for more efficient use of solar energy and increases the overall capacity factor of the installation.
- Grid Parity:
Many countries are reaching grid parity, meaning that the cost of solar energy is becoming comparable to the cost of electricity from the grid. This is making solar energy a more attractive option for many customers (a simple levelized-cost comparison is sketched after this list).
- Utility-scale solar:
Utility-scale solar projects are becoming increasingly popular, as they are able to generate large amounts of electricity at a lower cost than smaller, distributed projects.
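To make the grid-parity point above concrete, here is a minimal sketch of a levelized cost of electricity (LCOE) comparison. Every figure in it (capital cost, operating cost, annual output, lifetime, discount rate, grid tariff) is an assumed placeholder for illustration, not data from this article.

```python
# Minimal LCOE sketch: compare an assumed solar installation against an
# assumed grid tariff. All inputs are illustrative assumptions.

def lcoe(capex, annual_opex, annual_kwh, lifetime_years, discount_rate):
    """Levelized cost of electricity: discounted lifetime costs / discounted lifetime output."""
    costs = capex      # up-front cost paid in year zero
    energy = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1 + discount_rate) ** year
        costs += annual_opex / factor
        energy += annual_kwh / factor
    return costs / energy

solar_lcoe = lcoe(capex=12_000, annual_opex=150,
                  annual_kwh=9_000, lifetime_years=25, discount_rate=0.05)
grid_price = 0.15  # assumed grid tariff in $/kWh

print(f"Solar LCOE ~${solar_lcoe:.3f}/kWh vs grid ~${grid_price:.2f}/kWh")
print("Grid parity reached" if solar_lcoe <= grid_price else "Not yet at parity")
```

Grid parity in this simple framing just means the solar LCOE falls at or below the local grid tariff; the real comparison depends heavily on local financing costs, incentives, and tariff structures.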
The solar energy market is expected to continue to grow in the future as more countries adopt policies to promote renewable energy and as the cost of solar energy continues to decrease. However, the growth of the market can vary depending on factors such as government policies and regulations, economic conditions, and technological advancements.
Advancements In Solar Energy Systems For Urban Areas
There have been several advancements in solar energy systems for urban areas in recent years, including:
- Building Integrated Photovoltaics (BIPV):
BIPV is a type of solar panel that is integrated into the building, rather than being added on as an afterthought. This type of system can increase the amount of solar energy generated in urban areas, as it allows for more surface area to be used for solar panel installations.
- Urban rooftop solar:
Rooftop solar panels have become a popular choice for urban areas, as they make use of the limited space available in buildings. Advances in technology have made it possible to install solar panels on a variety of roof types, including flat roofs and metal roofs.
- Solar canopy and shading systems:
Solar canopies and shading systems are a great solution for urban areas as they provide shade for pedestrians and vehicles while also generating electricity. These systems can be used in parking lots, bus stops, and other outdoor spaces.
- Solar-powered street lights:
Many cities are replacing traditional street lights with solar-powered street lights. This is a cost-effective solution as it eliminates the need for trenching and underground wiring, and reduces energy consumption.
- Solar walls:
Solar walls are a type of solar panel that is installed on the walls of buildings, rather than on the roof. They can be used to generate electricity and also provide shading and insulation.
- Floating solar:
Floating solar systems are installed on bodies of water, such as lakes, reservoirs or canals. These systems can be a great solution for urban areas as they make use of otherwise unused space and can also help to reduce water evaporation.
- Community solar:
Community solar projects allow multiple customers to share a single solar installation. This can be a great solution for urban areas, as it allows residents who may not have the ability to install solar panels on their own property to still benefit from solar energy.
These are some of the advancements in solar energy systems that are being used in urban areas. As the technology continues to evolve, new solutions may be developed and implemented in the future, to make solar energy more accessible, efficient, and cost-effective for urban areas.
The future of solar energy is very promising as the technology continues to improve, costs continue to decrease, and demand for clean energy increases. Advances in solar cell technology, such as perovskite solar cells, have the potential to increase efficiency and decrease costs. Additionally, the integration of other technologies like battery storage, microgrids, and smart cities can further improve the overall performance and reliability of solar energy systems. Government policies and incentives, along with the growth of the electric vehicle market and building integrated photovoltaics (BIPV), will also play a crucial role in the growth of the solar energy market. Furthermore, the development of solar-powered cities, floating solar systems, and community solar projects can make solar energy more accessible, efficient, and cost-effective for urban areas. However, the implementation of these solutions and the level of success vary from one place to another depending on factors such as government policies, investment, and public awareness.
| 0.9931
|
FineWeb
|
The stunning and highly controversial find made by marine treasure hunters using side-scanning sonar to detect shipwrecks in the Baltic Sea has finally been identified as a submerged monumental construction from the Paleolithic era. The giant circular seafloor promontory measuring ~60m in diameter is actually a terraced monument built by the highly advanced Atlantean civilization over 14,000 years ago.
Co-discoverer and Ocean X team leader Dennis Aasberg describes just a few of the geometric features presented by the gargantuan disc-shaped temple rising above the sea floor, likening it to concrete:
Prohibitive conditions severely limit filming of the ancient monumental structure, especially rough seas and the very poor visibility of less than 1 m near the bottom. Animated digital terrain models allow a clearer perspective of the massive proportions and complex geometric configuration of the submerged Atlantean monument (above). The greatest hindrance to seafloor site investigation is an intense electromagnetic vortex that perpetually interferes with all types of electrical equipment situated on or above the ancient structure – in the vertical water column, onboard ships at the sea surface, and even affecting low-flying airplanes.
| 0.6698
|
FineWeb
|
Ellen earned a BA in Engineering Science and BE in Biomedical Engineering from Dartmouth College in 2014. She is a PhD student in Dr. George Truskey's Lab.
- Email Address: [email protected]
Investigating the Effects of Oxidative Stress on the Circulatory System Using TEBVs
The vascular system’s response to stress, like oxidation or deformation, mitigates numerous vascular pathologies, atherosclerosis primary among them. A typical blood vessel consists of a layer of endothelial cells, called the endothelium, surrounded by a layer of smooth muscle cells. The endothelium regulates the transport of molecules and fluids into the tissue, while the smooth muscle layer regulates diameter of the blood vessel. Oxidative stress can arise when dysfunctional proteins or immune cells release reactive oxygen into the blood stream or vessel wall. This primarily affects the endothelial cell layer, causing it to adopt a senescent, or aged, phenotype. Endothelial senescence leads to abnormal smooth muscle cell proliferation, reduced vasoreactivity in the presence of chemical regulators, and correlates with higher atherosclerosis risk. The Truskey lab has recently developed tissue-engineered blood vessels (TEBVs), tubular collagen constructs that, when seeded with human endothelial cells and fibroblasts, recreates the three-dimensional structure and properties of an arteriole in vitro. Most platforms for studying the vascular system rely on two-dimensional co-cultures or animal models. TEBVs show greater fidelity to native tissue than two-dimensional systems, and can be tested with the same functional assays used clinically to evaluate vascular health. The central hypothesis of this research is that TEBVs exposed to oxidative stress will have impaired function and increased risk of disease development. The effects of oxidative stress on the vascular system will be explored by characterizing stress-induced changes in (1) vasoreactivity, (2) vascular wound healing, and (3) atherosclerosis risk. Oxidative stress can be modeled in vitro by chronic exposure to hydrogen peroxide. 1: Changes in vasoreactivity will be characterized by evaluating changes in vessel diameter in the presence of vasoconstrictors and vasodilators. qRT-PCR will be used to quantify the changes in endothelial cell gene expression that cause the observed changes in vessel function. 2: To simulate vascular injury, TEBVs will either be exposed to the toxin theophylline or subjected to a scratch injury. Recovery from vascular injury will be evaluated by examining endothelial cell migration into the wound site and recovery of vasoreactivity post-injury. 3: To probe atherosclerosis risk after oxidative stress, TEBVs will be exposed to three atherogenic stimuli: oxidized low-density lipoprotein (oxLDL), activated monocytes, and the soluble protein TNFα.
AWARDS/HONORS/FELLOWSHIPS:
Dean’s Graduate Research Fellowship 2014-2016
NSF GRFP Honorable Mention 2015
Center for Biomolecular Tissue Engineering (CBTE) Fellow 2015-2017
| 0.8998
|
FineWeb
|