## Science and technology

Working with Nature – civil and hydraulic engineering aspects of real-world problems in water and at the waterfront, within coastal environments

One may have guessed what I intend to discuss in this piece. People are glued to numbers in one way or another – from data on finance, social demography and income distribution, to the scientific measurements of water level, wave height and wind speed. People say there is strength in numbers. This statement is mostly made to indicate the power of a majority. But another way to examine its effectiveness is this: suppose Sam is defending himself by arguing very strongly in favor of something. An observer then makes a comment like: well, these are all good points, but the numbers say otherwise. This single comment has the power to collapse the entire argument carefully built by Sam (unless Sam is well-prepared and able to counter-punch) – despite the fact that numerical generalizations are invariably associated with uncertainties. Uncertainty is simply the lack of surety or absolute confidence in something.

. . .

While numbers have such powers, one may want to know:

- For what purpose, and how, were these numbers collected?
- What is the attribution map of these numbers – what causes can be traced to them?
- Are there identifiable patterns in these numbers?
- If patterns exist, are they definite?
- Are the different sets of numbers correlated or do they belong to some groups?
- What is the likelihood of favorable outcomes of certain bins of numbers?
- What is the likelihood of favorable outcomes of certain extreme numbers that may not have been sampled yet?
- How certain are these likelihoods?
- Is the knowledge conveyed by numbers adequate for decision making?
The science that answers all these questions on an uncertainty paradigm is known as statistics. This science is about the stochastic (as opposed to deterministic) world – the world driven by the messages conveyed by random numbers (numbers showing no easily identifiable systematic pattern), and by the chances of favorable outcomes of those numbers. The former refers to what is generally known as Statistics – the science of collection, organization, presentation and interpretation of numbers or numerical information. The latter, as a sub-division of statistics, stands for Probability – the method of evaluating the likelihood of favorable outcomes of an event or hypothesis if sampled many times. Probability, with its root in logic, is commonly presented as a probability distribution, because it shows the distribution of a statistical data set – a listing of all the favorable outcomes, and how frequently they might occur. (As a clarification of two commonly confused terms: probability refers to what is likely to happen – it denotes the surety of a happening but unsurety in the scale of its likelihood; while possibility refers to what might happen but is not certain to – it denotes the unsurety of a happening.) Both of these methods aim at turning the information conveyed by numbers or data into knowledge – based on which inferences and decisions can be made. Statisticians rely on tools and methods to figure out the patterns and messages conveyed by numbers that may appear chaotic to ordinary views. The term many times originates from the Law of Large Numbers. Statisticians say that if a coin is tossed only a few times, for instance 10 times, it may yield, let us say, 7 heads (70%) and 3 tails (30%); but if tossed many more times, the outcomes of the two possibilities, head and tail, are each likely to approach 50% – the outcome one logically expects to see.
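The coin-toss convergence described above is easy to demonstrate. A minimal sketch, using only Python's standard library (the seed and toss counts are arbitrary choices for reproducibility):

```python
import random

def head_fraction(n_tosses, seed=42):
    """Toss a fair coin n_tosses times; return the fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# A short run can stray far from 50%...
print(head_fraction(10))       # may well come out as 0.7 or 0.3
# ...but a long run settles toward the logically expected 0.5
print(head_fraction(100_000))  # very close to 0.5
```

Running this repeatedly with different seeds makes the point even more vividly: the 10-toss fractions scatter widely, while the 100,000-toss fractions cluster tightly around one half.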
Following the proof of this observation by Swiss mathematician Jacob Bernoulli (1655 – 1705), the name of the law was formally coined in 1837 by French mathematician Simeon Denis Poisson (1781 – 1840). There is a third aspect of statistics – known as Statistical Mechanics (different from ordinary mechanics, which deals with one single state) – mostly used by physicists. Among others, this system deals with equilibrium and non-equilibrium processes, and with Ergodicity (the hypothesis that the long-time average of a single state is the same as the average of a statistical ensemble – an ensemble being a collection of various independent states).

. . .

A few lines on random and systematic processes. They can be discussed from both philosophical and technical angles. Randomness, or the lack of it, is partly about perception – irrespective of what the numbers say, one may perceive certain numbers as random while others may see them differently. In technical terms, let me try to explain through a simple example. Building upon the Turbulence piece on the NATURE page, one can say that the randomness of turbulent flow appears when measurements tend to approximate near-instantaneous sampling. Let us say one goes to the same spot again to measure turbulence under similar conditions; it is likely that the measurements would show different numbers. If the measurements are repeated again and again, a systematic pattern would likely emerge that could be traced to different causes – but the randomness and associated uncertainties of individual measurements would not disappear. Something more on randomness. The famous Uncertainty Principle proposed by German theoretical physicist Werner Karl Heisenberg (1901 – 1976) in 1927 changed the way science looks at Nature. It broke the powerful deterministic paradigm of Newtonian (Isaac Newton, 1642 – 1727) physics.
The principle says that there can be no certainty in the predictability of a real-world phenomenon. Apart from laying the foundation of Quantum Mechanics, this principle challenges all to have a close look at everything they study, model and predict. Among others, writing this piece was inspired by reading the books: A Brief History of Time (Bantam Books 1995) by British theoretical physicist Stephen Hawking (1942 – 2018); Struck by Lightning – the Curious World of Probabilities by JS Rosenthal (Harper Collins 2005); the 2016 National Academies Press volume Attribution of Extreme Weather Events in the Context of Climate Change; and Probability Theory – the Logic of Science by ET Jaynes (Cambridge University Press 2003). A different but relevant aspect of this topic – how decision-making processes depend on shouldering the risks associated with statistical uncertainties – was posted earlier on this page in the piece Uncertainty and Risk. In some earlier pieces on the NATURE and SCIENCE & TECHNOLOGY pages, I have described two basic types of models – the behavioral and the process-based mathematical models – the deterministic tools that help one analyze and predict diverse fluid dynamics processes. Statistical processes yield a third type – the stochastic or probabilistic models – tools that basically invite one to see what the numbers say, to understand the processes and predict things on an uncertainty paradigm. While the first two types of models are based on central-fitting to obtain mean relations for certain parameters, the third type looks beyond the central-fitting to indicate the probability of other occurrences.

. . .

Before moving further, a distinction has to be made. What we have discussed so far is commonly known as classical or Frequentist Statistics (given that all outcomes are equally likely, the probability of an event is the number of favorable outcomes divided by the total number of outcomes).
Another approach, known as Bayesian Statistics, was proposed by Thomas Bayes (1701 – 1761) – developed further and refined by French mathematician Pierre-Simon Laplace (1749 – 1827). Essentially, this approach is based on the general probability principles of association and conditionality. Bayesian statisticians assume and use a known or expected probability distribution to overcome, for instance, the difficulties associated with small sampling durations. It is like infusing an intuition (prior information or knowledge) into the science of presently sampled numbers. [If one thinks about it, the system is nothing new – we do it all the time in non-statistical opinions and judgments.] While the system can be advantageous and allows great flexibility, it also leaves room for manipulation – in influencing or factoring frequentist statistical information (which comes with confidence qualifiers) one way or another.

. . .

Perhaps a little bit of history is desirable. Dating back to ancient times, the concept of statistics existed in different cultures as a means of administering subjects and armed forces, and for tax collection. The term, however, appeared in 18th century Europe as the systematic collection of demographic and economic data for better management of state affairs. It took more than a century for scientists to formally accept the method. The reason for such a long gap is that scientists were somewhat skeptical about the reliability of the scattered information conveyed by random numbers. They were more keen on the robust and deterministic aspects of repeatability and replicability of experiments and methods that are integral to empirical science. Additionally, scientists were not used to trusting numbers unaccompanied by the fundamental processes causing them. Therefore, it is often argued that statistics is not an exact science.
Without going into the details of such arguments, it can safely be said that many branches of science, including physics and mathematics (built upon theories, and upon the systematic uncertainties associated with assumptions and approximations), also do not pass the exactitude (if one still believes this term) of science. In any case, as scientists joined, statistical methods received a big boost in sophistication, application and expansion (from simple descriptive statistics to many more advanced aspects that are continually being refined and expanded). Today statistics represents a major discipline in Natural and social sciences; and many decision processes and inferences are unthinkable without the messages conveyed, or the knowledge generated, by the science of numbers and chances. However, statistically generalized numbers do not necessarily tell the whole story – for instance, when it comes down to human and social management – because the human mind and personality cannot simply be treated as a rigid number. Moreover, unlike the methods scientists and engineers apply, for instance, to assess the consequences and risks of Natural Hazards on vulnerable infrastructure – statistics-based social decisions and policies are often biased toward favoring the mean quantities or majorities, at the cost of sacrificing the interests of vulnerable sections of the social fabric. When one reads the report generated by statisticians at the 2013 Statistical Sciences Workshop (Statistics and Science – a Report of the London Workshop on the Future of Statistical Sciences), participated in by several international statistical societies, one realizes the enormity of this discipline encompassing all branches of Natural and social sciences. Engineering and applied science are greatly enriched by this science of numbers and chances.
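As a toy illustration of the Bayesian updating idea discussed earlier – a prior belief refined by newly sampled evidence – here is a minimal sketch. All numbers are hypothetical, chosen only to show the mechanics of Bayes' rule:

```python
# Bayes' rule for a binary hypothesis: P(H|D) = P(D|H) P(H) / P(D)
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Update a prior probability of a hypothesis with one piece of evidence."""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Hypothetical example: the prior belief that a site exceeds a design water
# level in a given year is 10%; an indicator observation is then seen that is
# three times likelier if the exceedance hypothesis is true (0.6 vs 0.2).
p = posterior(0.10, 0.6, 0.2)
print(round(p, 3))  # 0.25 - the belief rises from 10% to 25%
```

This is exactly the "infusing an intuition" described above: the prior carries the analyst's earlier knowledge, and the sampled evidence reshapes it.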
. . .

In many applied science and engineering practices, a different problem occurs – that of how to attribute and estimate the function parameters for fitting a distribution, in order to extrapolate the observed frequency (the tail ends of the long-term sample frequencies, to be more specific) to predict the probability of an extreme event (which may not have occurred yet). The techniques applied for such fittings to a distribution (which end up being different shapes of exponential asymptotes) of measurements are known as the extremal probability distribution methods. They generally fall into a group known as the Generalized Extreme Value (GEV) distribution – and depending on the values of the location, scale and shape parameters, they are referred to as Type I (or Gumbel distribution; German mathematician Emil Julius Gumbel, 1891 – 1966), Type II (or Fisher-Tippett distribution; British statisticians Ronald Aylmer Fisher, 1890 – 1962, and Leonard Henry Caleb Tippett, 1902 – 1985) and Type III (or Weibull distribution; Swedish engineer Ernst Hjalmar Waloddi Weibull, 1887 – 1979). This in itself is a lengthy topic – I hope to come back to it at some other time. For now, I have included an image I worked on, showing the probability of exceedance of water levels measured at Prince Rupert in British Columbia. From this image one can read, for example, that a water level of 3.5 m CD (Chart Datum – the bathymetric vertical datum) will be exceeded 60% of the time (or that water levels will be higher than this value 60% of the time, and lower 40% of the time). In extreme probability distribution it is common practice to refer to an event in recurrence intervals or return periods. This interval in years says that an event of a certain return period has an annual probability equal to the reciprocal of that period (given that the sampling refers to annual maxima or minima).
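The return-period arithmetic can be sketched in a few lines – the reciprocal relation just stated, plus the standard encounter-probability formula for the chance of seeing at least one such event over an n-year exposure:

```python
def annual_probability(return_period_years):
    """Annual exceedance probability of a T-year event (annual-maxima sampling)."""
    return 1.0 / return_period_years

def encounter_probability(return_period_years, exposure_years):
    """Chance of at least one exceedance of a T-year event within n years."""
    p = annual_probability(return_period_years)
    return 1.0 - (1.0 - p) ** exposure_years

print(annual_probability(100))                   # 0.01, the 1% annual chance
print(round(encounter_probability(100, 50), 3))  # 0.395 over a 50-year design life
```

The second function is the sobering one for designers: a "100-year" event is far from unlikely over the life of a structure – there is roughly a 40% chance of encountering it within 50 years.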
For example, in a given year, a 100-year event has a 1-in-100 chance (or 1%) of occurring. Another distinction in statistical variables is very important – the difference between continuous and discrete random variables. Let me try to briefly clarify it by citing some examples. A continuous random variable is like water level – this parameter changes and has many probabilities or chances of occurring, from 0 (exceptionally unlikely) to 1 (virtually certain). In many cases, this type of variable can be described by the Gaussian (German mathematician Carl Friedrich Gauss, 1777 – 1855) or Normal Distribution. A discrete random variable is like episodic earthquake or tsunami events – which are sparse and do not follow the rules of continuity, and can best be described by the Poisson Distribution.

. . .

When one assembles huge amounts of data, there are some first steps one can take to understand them. Many of these are described in one way or another in different textbooks – I am tempted to provide a brief highlight here.

- A first impression of the quality of data can always be made by deciphering the documentation on the collection platforms, instrumentation, methodology, etc.
- Next, it is always helpful to make a plot of the data, in order to make a preliminary assessment of scatter or cluster, errors, outliers, shift (*which could result from a datum change, a change in the instrumentation configuration, etc.*), periodicity, and trend if any. These first two steps are very important for screening and validating the collected information.
- The next important step is to do some descriptive statistics – the central tendency (*the location parameter – mean, median and mode*), the spread around the center (*the scale parameter – the standard deviation, STD*) and the shape parameter (*symmetry/asymmetry, peakedness*). The simplest and most prevalent distribution yielding these parameters is the symmetric Normal Distribution – typical of continuous random variables. In this distribution, the larger the standard deviation, the larger is the data scatter – with 68%, 95% and 99.7% of data lying within ±1 STD, ±2 STD and ±3 STD of the mean, respectively. In coastal waters, sea-state waves follow a skewed or asymmetric distribution – the **Rayleigh Distribution** (Lord Rayleigh, 1842 – 1919). Whatever distribution is applicable, the data are usually presented as a PDF (*probability density function*), CPD (*cumulative probability distribution*) or EPD (*exceedance probability distribution*). **The first represents the occurrence probability; the last two are comparative probabilities.**
- Most continuous random variables represent a superimposed multiplicity of information belonging to different frequencies, amplitudes and phases. Application of the
**Fourier Transform** (Jean-Baptiste Joseph Fourier, 1768 – 1830) routines is helpful to decompose the data into different components. This is useful for finding the dominating frequencies, and the periodicity or trend in the data.
- Depending on the purpose, one can then choose to remove some frequencies from the data by applying an appropriate filter. One such is the **Low-Pass Filter (LPF)**, which passes information at frequencies lower than a cut-off frequency and attenuates the higher frequencies. The other is the **High-Pass Filter (HPF)** – passing frequencies higher than a cut-off frequency and attenuating the lower frequencies. Moving-average techniques are also applied for similar purposes.
- Looking into the stationarity of data is very important.
**Stationarity simply means that no upward or downward trend is evident in the data when moving averages are applied.** Stationarity or non-stationarity has serious implications for future projections (*such as GEV distributions – which are based on the assumption that data are stationary*), for example of climate change and its consequences. Let me cite one simple example. The water level that one measures at a tidal station has many components – the longest period being the Nodal Period of 18.6 years, caused by the oscillating inclination of the Moon's orbit relative to the plane of the Earth's Equator. This factor induces a subtle and small change in the time-series water level data – therefore the **Mean Sea Level** is determined by averaging over this period. It is understandable that any prediction of a climate change induced Sea Level Rise trend must take account of such factors to eliminate the undesirable frequencies.
- There are some other analyses statisticians use to make sense of large amounts of multi-variate data, applied especially in climate, social and biological sciences. One of them is known as
**Cluster Analysis** – referring to grouping or clustering data, such that the data in each group have more similar attributes or characteristics than the ungrouped data. The **Principal Component Analysis (PCA)** is one such technique – a tool for identifying the patterns in data, and for expressing the data in a way that highlights their similarities and differences. Among others, covariance evaluations are made to determine the strength of data correlation (*if COV is 0, +ve or -ve, the variables are uncorrelated, positively correlated or negatively correlated, respectively*).
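The first few screening steps above can be sketched as a minimal pipeline. The synthetic "water level" series (a small trend, a daily-like oscillation and Gaussian noise) and all parameter choices are illustrative only:

```python
import math
import random
import statistics

# Synthetic series: linear trend + periodic signal + random noise (illustrative)
rng = random.Random(7)
series = [0.001 * t + 0.5 * math.sin(2 * math.pi * t / 24) + rng.gauss(0, 0.1)
          for t in range(2400)]

# Descriptive statistics: the location and scale parameters
mean = statistics.fmean(series)
std = statistics.stdev(series)

# Empirical check of the Normal coverage rule (68 / 95 / 99.7 %); note the
# trend and oscillation make this series non-Normal, so the fractions differ
coverage = [sum(abs(x - mean) <= k * std for x in series) / len(series)
            for k in (1, 2, 3)]

# A moving average acts as a crude low-pass filter and exposes the trend
def moving_average(xs, window):
    return [statistics.fmean(xs[i:i + window])
            for i in range(len(xs) - window + 1)]

smoothed = moving_average(series, 24)          # window spans one oscillation
trend_evident = smoothed[-1] > smoothed[0]     # upward drift -> non-stationary
print(mean, std, coverage, trend_evident)
```

With the 24-point window matching the oscillation period, the moving average suppresses the periodic component and the underlying upward trend stands out – precisely the stationarity check described in the bullet above.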
. . .

Before finishing, I would like to illustrate a case of conditional probability, applied to specify the joint distribution of wave height and period. These two wave properties are statistically inclusive and dependent; and coastal scientists and engineers usually present them in joint frequency tables. As an example, the joint frequency of the wave data collected by the Halibut Bank Buoy in British Columbia shows that 0.25 – 0.5 m, 7 – 8 s waves occur 0.15% of the time. As for the conditional occurrence of these two parameters, analysis would show that the probability of 7 – 8 s waves is about 0.52% given the occurrence of 0.25 – 0.5 m waves; and that of 0.25 – 0.5 m waves is about 15.2% given the occurrence of 7 – 8 s waves. Here is a piece of caution stated by the 19th century British statesman Benjamin Disraeli (1804 – 1881): There are three kinds of lies: lies, damned lies, and statistics. Apart from bootstrapping, lies are ploys designed to take advantage by deliberately manipulating and distorting facts. The statistics of Natural sciences are less likely to qualify as lies – although they may be marred by uncertainties resulting from human error, and from data collection techniques and methods (for example, the data collected in the historic past were crude and sparse, therefore more uncertain than those collected in modern times). Data in the various disciplines of social sciences, on the other hand, are highly fluid in terms of sampling focus, size, duration and methods, in data-weighing, and in the processes of statistical analyses and inferences. Perhaps that is the reason why the statistical assessments of the same socio-political-economic phenomena by two different countries hardly agree, despite the fact that national statistical bodies are supposedly independent of any influence or bias. Perhaps such an impression of statistics was one more compelling reason for statistical societies to lay down professional ethics guidelines (e.g.
the International Statistical Institute; the American Statistical Association).

. . . . .

- by Dr. Dilip K. Barua, 19 January 2018
I would like to begin this piece with a line from Socrates (469 – 399 BCE), who said: I am the wisest man alive, for I know one thing, and that is that I know nothing. This is a philosophical statement born of deep realization – neither practical nor useful in the mundane hustle-bustle of daily lives and economic processes. Philosophers tend to see the world differently, sometimes beyond ordinary comprehension – but it is something a society looks to in order to move forward in the right direction. Scientists and engineers – for that matter any investigator who explores deep into something – come across this type of feeling nonetheless: the feeling that there appear more questions than definitive answers. This implies that our scientific knowledge is only perfect to the extent of a workable explanation or solution supported by assumptions and approximations – but in reality suffers from transience embedded with uncertainties. This piece, however, is not about that – it is about an interesting aspect of the actions and reactions between waves and structures.

. . .

Among other effects, these interaction processes cause vortices around a structure, scouring the seabed and undermining its stability. Let me share all these aspects in a nutshell. As done in other pieces, I will provide some numbers to give an idea of what we are talking about. One of the keys to understanding these processes – for that matter any dynamic equilibrium of fluid flow – is to envision the principle of the conservation of energy: the incident wave energy must be balanced by the structural responses – the processes of dissipation, reflection and transmission. The materials covered in this piece are based on my experience in different projects; and on the materials described in: the Random Seas and Design of Maritime Structures (Y.
Goda 2000); the 2002 Technical Report on Wave Run-up and Wave Overtopping at Dikes (the Netherlands); the 2006 USACE Coastal Engineering Manual (EM 1110-2-1100 Part VI); the 2007 EurOtop Wave Overtopping of Sea Defences and Related Structures (EUROCODE Institutes); the 2007 Rock Manual of EUROCODE, CIRIA (Construction Industry Research and Information Association) and CUR (Civil Engineering Research and Codes, the Netherlands); and others.

. . .

Most of the findings and formulations in wave-structure interactions and scour are empirical – which in this context means that they were derived from experimental and physical scale-modeling tests and observations in controlled laboratory conditions, relying on a technique known as the dimensional analysis of variables. Although they capture the first-order processes correctly, in real-world problems the formulation coefficients may require judgmental interpretations of some sort to reflect the actual field conditions. Some materials relevant to this piece were covered earlier on the NATURE and SCIENCE & TECHNOLOGY pages. This topic can be very elaborate – and to manage it to a reasonable length, I will limit it to discussing some selected aspects of:

- reflection (
*a portion of the waves reflected back into the incident waves by the structure*);
- runup (*the vertical height of the wave up-rush along a sloping structure or beach after breaking*);
- transmission over a fixed breakwater (*a portion of the waves propagating across a submerged structure, or the overtaken runup spilling over the structure crest into the back area*), or through a floating breakwater (*a portion of the waves passing underneath a floating structure*);
- overtopping (*the runup reaching and overtaking the crest of the structure*);
- and scour (*the obstruction created by the structure generating nearfield vortices that scour an erodible seabed within the immediate vicinity of the structure*).
Diffraction (*the bending of the incident waves around a structure*) and scattering (*waves reflecting from the structure in multiple directions, fragmenting the incident wave*) are not covered in this piece. I hope to come back to them at some other time.

. . .

What must one look for to describe wave-structure interactions? Perhaps the first is to realize that, in the presence of slender structures, only the waves with lengths (L) less than about 5 times the structure's dimension (D) are poised to cause wave-structure interactions (see the piece Wave Forces on Slender Structures on this page). The second is that the wave energy must remain in balance – which translates to the fact that the sum of the squares of the wave heights (H^2) in dissipation, reflection and transmission must add up to the square of the incident wave height. This balancing is usually presented in terms of coefficients (the ratios of the dissipated, reflected and transmitted wave heights to the incident wave height), the squares of which (C^2) must add up to one. The third is the Surf Similarity Number (SSN, discussed in The Surf Zone piece on this page) – this parameter appears in every relation where a sloping structure is involved, and it is directly proportional to wave period and slope. The fourth is the direction of wave forcing relative to the loading face of the structure – its importance can simply be understood from the differences in interactions between head-on and oblique waves. The importance of other structural parameters will surface as we move on to discussing the processes.

. . .

Wave reflection can be a real problem for harbors lined with vertical-face seawalls. It can cause unwanted oscillation and disturbance in vessel maneuvering and motion, and scouring of protective structure foundations. As one can expect, wave reflection from a vertical-face smooth structure is higher than, and different from, that of a non-overtopped sloping structure.
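The energy balance just stated gives a quick way to back out the dissipation coefficient once the reflected and transmitted fractions are known; a minimal sketch (the example coefficients are illustrative, not from any specific structure):

```python
import math

def dissipation_coefficient(cr, ct):
    """Given reflection (cr) and transmission (ct) coefficients, return the
    dissipation coefficient cd from the balance cr^2 + ct^2 + cd^2 = 1."""
    remainder = 1.0 - cr * cr - ct * ct
    if remainder < 0:
        raise ValueError("cr and ct together violate the energy balance")
    return math.sqrt(remainder)

# Illustrative numbers only: a sloping structure reflecting 36% of the
# incident wave height and transmitting 20% dissipates the rest.
cd = dissipation_coefficient(0.36, 0.20)
print(round(cd, 3))  # 0.911
```

Note how heavily dissipation dominates here: because the balance is in squared heights, even sizable reflection and transmission fractions leave most of the incident energy to be dissipated on the structure.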
A non-breaking head-on wave on a smooth vertical-face structure is likely to reflect straight back into the incident waves – an example of perfect reflection. When waves are incident at an angle on such a structure, the direction of the reflected waves follows the principle of optical geometry. For sloping structures, the reflection is directly proportional to the SSN, and it can be better grasped from a relation proposed by Postma (1989). Let us see it through an example. A 1-m high wave with periods of 6 s and 15 s, propagating head-on onto a non-overtopped, 2-to-1 straight sloping stone breakwater (with a smooth surface and an impermeable core), would produce reflected waves in the order of 0.36 m and 0.83 m, respectively. When the slope is very rough, built of quarried rock, most of the incident energy is likely to be absorbed by the structure. It is relevant to point out that according to Goda (2000), natural sandy beaches reflect back some 0.25% to 4% of the incident wave energy.

. . .

Wave runup is an interesting phenomenon – we see it each time we are on the beach, not to speak of the huge runups that occur during a tsunami, overwhelming us in awe and shock. The runup is a way for waves to dissipate their excess energy after breaking. Different relations proposed in the literature show its dependence on wave height and period, the angle of wave incidence, the beach or structure slope – and on the geometry, porosity and roughness. A careful rearrangement of the different proposed equations would indicate that the runup is directly and linearly proportional to wave period and slope, but somewhat weakly dependent on wave height. This is the reason why the runup of a swell is higher than that of a lower-period sea – why a flatter slope is likely to have less runup than a steeper one – and why a tsunami runup is so huge.
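The period and slope dependence running through these relations enters via the Surf Similarity Number. A minimal sketch of its computation from the deep-water wave length (standard linear-wave definitions, not any particular design formula):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def deep_water_length(period_s):
    """Deep-water wave length L0 = g T^2 / (2 pi), in meters."""
    return G * period_s ** 2 / (2 * math.pi)

def surf_similarity(slope_tan, wave_height_m, period_s):
    """Surf Similarity (Iribarren) Number: xi = tan(alpha) / sqrt(H / L0)."""
    return slope_tan / math.sqrt(wave_height_m / deep_water_length(period_s))

# A 1-m wave on a 2-to-1 slope (tan alpha = 0.5): the longer period gives
# a much larger SSN, hence stronger reflection and higher runup
for t in (6.0, 15.0):
    print(f"T = {t:>4} s  SSN = {surf_similarity(0.5, 1.0, t):.2f}")
```

For the 6-s and 15-s waves of the example above, the SSN works out to roughly 3.7 and 9.4 – the longer-period wave sits much further into the surging regime, consistent with its larger reflected height.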
An estimate following the USACE-EM would show that the maximum runups on a 10-to-1 foreshore beach slope are 1.9 m and 3.8 m for a 1-m high wave with periods of 6 s and 15 s, respectively. The explanation of this runup behavior is that the longer the wave period, the less is its loss of energy during breaking, affording the runup process more residual energy to carry. Although a runup depends on its parent oscillatory waves for energy, its hydrodynamics is translatory, dominated by the laws of free-surface flows – which means, in simple terms, the physics of the steady Bernoulli (Daniel Bernoulli, 1700 – 1782) equation.

. . .

How do wave transmissions over and through a maritime structural obstacle work? Such structural obstacles are called breakwaters because they block or attenuate wave effects in order to protect the areas behind them. There are two basic types – the fixed or rigid, and the floating breakwaters. The former is usually built as a thin-walled sheet pile, a caisson or a sloped rubble-mound (built of quarried rock or other manufactured shapes). The latter, moored to the seabed or fixed to vertical mono-piles, is usually built of floats with or without keels. Floating breakwaters are only effective in a coastal environment of relatively calm, short-period waves (threshold maximum ≈ 4 s) – because long-period waves tend to transmit easily, with negligible loss of energy. The attenuation capacity of such breakwaters is often enhanced in a catamaran-type system by joining two floats together. First let us have a glimpse of wave transmission over a submerged and overtopped fixed breakwater.
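A sketch of a transmission estimate is given below in the general form I associate with the d'Angremond and others (1996) relation – transmission falling linearly with relative freeboard, modified by crest width and SSN. The coefficient values and validity bounds here are as I recall them and should be verified against the original reference before any design use:

```python
import math

def transmission_coefficient(freeboard_m, wave_height_m, crest_width_m, ssn,
                             permeable=True):
    """Kt in the d'Angremond-type form (coefficients indicative only;
    verify against the original reference before design use)."""
    a = 0.64 if permeable else 0.80
    kt = (-0.4 * freeboard_m / wave_height_m
          + a * (crest_width_m / wave_height_m) ** -0.31
          * (1.0 - math.exp(-0.5 * ssn)))
    return min(max(kt, 0.075), 0.80)  # indicative bounds of the empirical fit

# Illustrative case echoing the text: 1-m, 6-s head-on wave, 1-m crest width;
# negative freeboard means a submerged crest (green overtopping)
for rc in (-0.5, 0.0, 0.5):
    print(rc, round(transmission_coefficient(rc, 1.0, 1.0, 3.75), 2))
```

Whatever the exact coefficients, the sketch reproduces the behavior described in the text: transmission is highest when the crest is submerged, and falls linearly as the freeboard rises.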
To illustrate this over a low-crested (structure crest height near the still water level – somewhat higher or lower) rubble-mound breakwater, an image is included showing the transmission coefficients (the ratio of the transmitted to the incident wave height) for a head-on, 1-m high, 6-s wave incident on the 2-to-1 slope of a breakwater with a crest width of 1 m. This is based on a relation proposed by researchers at Delft (d'Angremond and others 1996), and shows that, with other factors remaining constant, the transmission coefficient is linearly but inversely proportional to the freeboard. For both permeable and impermeable cores, the image shows high transmission under submergence (often termed green overtopping) and low transmission by overtopping – with the permeable core affording more transmission than a non-permeable one. Emergent or submergent with the changing tide level, such transmissions are directly proportional to wave height, period and stoss-side breakwater slope, but inversely proportional to the breakwater crest width. The concept of wave transmission over a submerged breakwater is used to design artificial reefs that attenuate wave effects on an eroding beach. In another application, the reef layout and configuration are positioned and designed in such a way that wave focusing is stimulated. The focused waves are then led to shoal to a high steepness suitable for surfing – giving the name artificial surfing reef. Transmission through a floating breakwater is more complicated – more so for a loosely moored than for a rigidly moored one. As a rule of thumb, an effective floating breakwater requires a width more than half the wave length, or a draft at least half the water depth. To give an idea, an estimate would show that for a 1-m high, 4-s head-on wave incident on a single float with a width of 2 m and a draft of 2 m – rigidly moored in 5 m water depth – the transmitted wave height immediately behind would be 0.55 m.
If the draft is increased to 2.5 m by providing a keel, the transmitted wave height would reduce to 0.45 m. The estimates are based on Wiegel (1960), Muir Wood and Fleming (1981), Kriebel and Bollmann (1996) and Cox and others (1998).

. . .

Wave overtopping is a serious problem for waterfront seawalls installed to protect urban and recreational areas from high storm waves. It disrupts normal activities and traffic flow, damages infrastructure and causes erosion. Among the various researches conducted on this topic, the Owen (1982) work at HR Wallingford shows some insights into overtopping discharge rates. The overtopping discharge rate is directly proportional to the incident wave height and period, but inversely proportional to the freeboard (the height of the structure crest above still water level). To give an idea: for a 1-m high freeboard on a 2-to-1 sloped structure, a 1-m high wave with periods of 6 s and 15 s would produce overtopping discharges of 0.10 and 0.82 m^2/s (i.e. m^3/s per meter width of the crest), respectively. If the freeboard is lowered to 0.5 m, the same waves would cause overtopping of 0.26 and 1.25 m^2/s.

. . .

Scouring of an erodible seabed around and in the vicinity of a structure results from the obstruction the structure poses to fluid flows. The obstructed energy finds its way into downward vertical motions – into nearfield vigorous actions and vortices scooping sediments out of the seabed. Scouring processes are fundamentally different from erosion processes – the latter is a farfield phenomenon and occurs due to shearing actions. The closest analogies of the two processes are these: the vortex scouring action is like a wind tornado, while the process of erosion is like a ground-parallel wind picking up and blowing sand. Most coastal scours occur in front of a vertical seawall, near the toe of a sloping structure, at the head of a breakwater, around a pile, and underneath a seabed pipe.
They are usually characterized by the maximum depth of scour (an important parameter indicating the undermining extent of scouring), and the maximum peripheral extent of scouring action. It turns out that these two scouring dimensions scale with wave height, wave period, water depth and structural diameter or width. These parameters are lumped into a dimensionless number known as the Keulegan-Carpenter (KC) Number (a ratio of the product of wave period and nearbed wave orbital velocity to the structure dimension), proposed by GH Keulegan and LH Carpenter (1958). This number was introduced in the Wave Forces on Slender Structures piece on this page. Experimental investigations by Sumer and Fredsøe (1998) indicate that a scour hole around a vertical pile develops only when KC > 6. At this value, wave drag starts to influence the structure, adding to the inertial force, and the scouring action continues to increase exponentially as the KC and the structure diameter increase. Scour prevention is mostly implemented by providing stone riprap – of suitable size, gradation and filter layering. . . . The Koan of this piece: If you do not respect others – how can you expect the same from them? . . . - by Dr. Dilip K. Barua, 20 October 2017 This topic represents one of the most interesting problems for port terminal installations – or in a broader sense, for station keeping or tethering of floating bodies such as a ship (vessel) or a floating offshore structure. Professionals like naval architects and some maritime hydraulic civil engineers are trained to handle this problem. I had the opportunity to work on some projects that required static force equilibrium analysis for low-frequency horizontal motions, and dynamic motion analysis accounting for the first-order motions in all degrees of freedom.
They were accomplished through modeling efforts to configure terminal and mooring layouts, and to estimate the restraining forces on mooring lines and fenders for developing their specifications. Let me share some elements of this interesting topic in simple terms. To keep this piece to a reasonable length, some other aspects of ship mooring – such as its impacts on structures during berthing – are not covered. I hope to get back to these at some other time. This piece can appear highly technical, so I ask the general readers to bear with me as we go through it. It is primarily based on materials described by the American Petroleum Institute (API), the Oil Companies International Marine Forum (OCIMF), The World Association for Waterborne Transport Infrastructure (PIANC), British Standards (BS), the pioneering works of JH Vugts (TU Delft 1970) and JN Newman (MIT Press 1977), and others. Imagine an un-tethered rigid body floating in water agitated by current, wave and wind. These three environmental parameters will try to impose some motions on the body – the magnitudes of which will depend on the strength and frequency of the forcing parameters – as well as on the inertia of the body resisting the motion and on the strength of the restoring forces or stiffness. . . . Before moving on to discuss the motions further, a few words on current, wave and wind are necessary. Some of these environmental characteristics were covered in different pieces posted on the NATURE and SCIENCE & TECHNOLOGY pages. Among the three, currents caused by long waves such as the tide are assumed steady, because their time-scales are much longer than those of the ship motions. Wave and wind, on the other hand, are unsteady – and spectral in frequency and direction – causing motions in high-frequency (short period) to low-frequency (long period) categories. In terms of actions, the ship areas below the waterline are exposed to current and wave actions – for wind action, it is the areas above the waterline.
Often the individual environmental forcing on the ship’s beam (normal to the ship’s long axis) proves to dominate the directional loading scenario. But an advanced analysis of the three parameters is required in order to characterize their combined actions from the perspectives of operational and tolerance limits – and for design loads acting on the different loading faces of the ship. The acceptable motion limits vary among ships, accounting for shipboard cargo handling equipment and safe working conditions. Now some brief notes on oscillation dynamics. When a rigid-body elastic system is forced to displace from its equilibrium position, it oscillates in an attempt to balance the forces of excitation and restoration. The simplest examples are the vertical displacement of a mass hung from a spring, and the angular displacement of a body fixed to a pivot. When the forcing excitation is stopped after the initial input, an elastic body oscillates freely with exponentially diminishing amplitude. A forced oscillation occurs when the forcing continues to excite the system – in such cases resonance could occur. . . . The natural frequency (or reciprocally, the natural period) of a system is its own property, and depends on its inertial resistance to motion and on its strength in restoring itself to equilibrium. The best way to visualize it is to let the body float freely in undamped oscillations. It turns out that the natural period of a floating body is directly proportional to its size or its displaced water mass. This means that a larger body has a longer natural period of oscillation than a smaller one. Understanding the natural period is very important because if the excitation coincides with the natural period, resonance occurs, causing unmanageable amplification of forces. In reality however, resonance rarely occurs because most systems are damped to some extent.
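The spring-mass analogy above can be made concrete. A minimal sketch, assuming the standard one-degree-of-freedom relations: natural frequency from stiffness and mass, damping ratio from the damping coefficient, and a slightly lengthened period when the system is under-damped.

```python
import math

def free_decay(mass, stiffness, damping):
    """Natural period and damping ratio of a 1-DOF elastic system (the
    spring-mass analogy in the text). An under-damped system (zeta < 1)
    oscillates with exponentially decaying amplitude at the damped
    period; the frequency itself stays constant during the decay.
    Returns (natural period Tn, damped period Td or None, zeta)."""
    wn = math.sqrt(stiffness / mass)                       # natural frequency, rad/s
    zeta = damping / (2.0 * math.sqrt(stiffness * mass))   # damping ratio
    Tn = 2.0 * math.pi / wn                                # natural period, s
    if zeta < 1.0:                                         # under-damped: oscillatory
        Td = Tn / math.sqrt(1.0 - zeta ** 2)               # damped period, slightly longer
        return Tn, Td, zeta
    return Tn, None, zeta      # critically or over-damped: no free oscillation
```

Note that quadrupling the mass at fixed stiffness doubles the natural period, consistent with the statement that a larger body has a longer natural period.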
Damping reduces the oscillation amplitude of a body by absorbing the imparted energy – partially (under-damped), more than necessary (over-damped), or just enough to cause critical damping. Most floating systems are under-damped. Force analysis of an over-damped body requires an approach different from the motion analysis. . . . Here are some relevant terms describing a ship. A ship is described by its center-line length at the waterline (L); the beam or width B (the midship width at the waterline); the draft D (the height between the waterline and the ship’s keel); the underkeel clearance (the gap between the ship’s keel and the seabed); the fully loaded Displacement Tonnage (DT) – the displaced mass of water at the vessel’s full carrying capacity; the Dead Weight Tonnage (DWT) – the carrying capacity itself, accounting for cargo, fuel, water, crew (including passengers if any) and stores; and the Lightweight Tonnage (LWT) – the weight of the empty vessel. A vessel is known as a ship when its DWT is 500 or more. The ship dimensions are related to one another in some fashion, allowing estimates of the others to be made if one is known. A term known as the block coefficient (CB) represents the fullness of the ship – it is the ratio of the ship’s actual displaced volume to its prismatic or block volume (the product of L, B and D). The typical CBs are 0.85 for a tanker and 0.6 for a ferry. To give an idea, the new Panamax vessel (the maximum allowed through the new Panama Canal lock) is L = 366 m, B = 49 m, and D = 15 m. Classification societies like ABS (American Bureau of Shipping) and LR (Lloyd’s Register of Shipping) set the technical standards for the construction and operation of ships and offshore floating structures. . . . What are some of the basic rigid body motion characteristics? The floating body motions occur in six degrees of freedom representing linear translational and rotational movements.
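The block coefficient gives a quick way to estimate displacement from the main dimensions. A rough sketch, assuming seawater density of 1.025 t/m^3; the example applies the tanker-like CB of 0.85 quoted above to the new Panamax dimensions purely for illustration.

```python
RHO_SW = 1.025  # seawater density, t/m^3

def displacement_tonnage(L, B, D, Cb):
    """Fully loaded displacement (tonnes) from the block coefficient:
    actual displaced volume is Cb times the L*B*D block volume.
    A rough, illustrative estimate only."""
    return RHO_SW * Cb * L * B * D

# New Panamax dimensions from the text (L = 366 m, B = 49 m, D = 15 m),
# with an assumed tanker-like Cb of 0.85:
dt_panamax = displacement_tonnage(366, 49, 15, 0.85)
```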
Literature describes the six motion types in different ways; perhaps a description relying on the vessel’s axes is a better way of visualizing them. All three axes – the horizontal x-axis along the length, the horizontal y-axis across the width, and the vertical z-axis – originate at the center of gravity (cg) of the vessel. To illustrate them I have included a generic image (credit: anon) showing the motions referring to:
- the x-axis: the translational *surge*, and the rotational *roll*;
- the y-axis: the translational *sway*, and the rotational *pitch*;
- the z-axis: the translational *heave*, and the rotational *yaw*.
Roll and pitch rotations are inclinational (listing/heeling of the vessel – tilting from the vertical upright position). Because L is some 3.5 to 10 times B, the moment of inertia resisting pitch is much higher than that resisting roll. When a vessel is restrained at berth, the motions cause forces on the mooring lines, as well as on any other obstacles – such as the fender (the fender and fender mounting provide a stand-off distance between the ship and the berthing face) – that resist the motion. . . . Another important point needs to be clarified: the stability or equilibrium of the vessel in inclinational motions. A floating body is stable when its centers of gravity (cg) and buoyancy (cb) lie on the same vertical line. When this configuration is disturbed by environmental exciting forces like wind and wave, or by mechanical processes like imbalanced loading and unloading operations, the vessel becomes unstable, shifting the positions of cg and cb. Imbalanced loading can only be restored to equilibrium by the vessel operators re-arranging the cargo. Among other duties, the vessel operators have the critical responsibilities of leaving the berth during a storm, of tending the mooring lines – so that all the mooring lines share the imposed loads – and of keeping the ship within the berthing limit. For non-inclinational motions like surge, sway, heave and yaw, the coincident positions of the cg and cb are not disturbed. They just translate back and forth in surge, and near and far in sway oscillations. In heave and yaw, the coincident positions of cg and cb do not translate horizontally – heave is the vertical up-and-down motion, and yaw is the angular motion about the vertical axis. . . . How to describe these motions and the corresponding forces in mathematical terms?
The description as an equation is conceived in analogy with the equilibrium principle of a spring-mass system – where an oscillating exciting force causes acceleration, velocity and translation of the floating body. For each of the six degrees of freedom an equation can be formed, totaling six equations of motion. With all six degrees of freedom active, the problem becomes formidable to solve analytically; the only option then is to resort to numerical modeling techniques. Motion analyses are conducted by two different approaches – the Frequency Domain analysis focuses on motions at different frequencies, adding them together as a linear superposition; the Time Domain analysis, on the other hand, focuses on motions caused by the time-series of the exciting parameters. Perhaps some more clarification of the terms – the mass including the added mass, the damping and the stiffness – is helpful. The total mass (kg) of a floating body comprises the mass of water displaced by it and the mass of the surrounding water, called the added mass – which is proportional to the size of the body and also depends on the motion type. These two masses resist the acceleration of the body. The damping (N.s/m; Newton = measure of force; s = time, second; m = distance, meter) is a measure of the absorption of the imparted energy by the floating system – having the effect of exponentially reducing its oscillation. Damping is of three basic types: by viscous action, by wave drift motion and by mooring system restraint. The stiffness (N/m) is the force required to restore an elastic body to its equilibrium position. The terms discussed above refer to the rectilinear motions of surge, sway and heave. How about the terms for the angular motions of roll, pitch and yaw?
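A single degree of freedom of the system just described can be sketched numerically. This is a minimal time-domain integration of the spring-mass analogue for surge, (m + a)·x'' + b·x' + c·x = F(t), with all parameter values hypothetical; real analyses solve all six coupled equations with nonlinear line stiffness.

```python
import math

def surge_response(m, a, b, c, force, dt=0.05, t_end=120.0):
    """Time-domain sketch of the 1-DOF moored-surge equation
        (m + a) x'' + b x' + c x = F(t),
    with m the ship mass (kg), a the added mass, b linear damping
    (N.s/m) and c the mooring stiffness (N/m). Semi-implicit Euler
    integration from rest; returns the displacement history (m)."""
    x, v, t = 0.0, 0.0, 0.0
    xs = []
    while t < t_end:
        acc = (force(t) - b * v - c * x) / (m + a)  # Newton's second law
        v += acc * dt                               # update velocity first
        x += v * dt                                 # then displacement
        xs.append(x)
        t += dt
    return xs

# Hypothetical example: 60-s harmonic excitation of a moored body
resp = surge_response(m=1.0e7, a=2.0e6, b=5.0e5, c=1.0e5,
                      force=lambda t: 1.0e5 * math.sin(2 * math.pi * t / 60.0))
```

With these assumed values the natural period is about 69 s, so a 60-s excitation sits close to resonance and the response amplifies beyond the static deflection F/c.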
The total mass for angular motions becomes the mass moment of inertia, and the stiffness is replaced by the righting moment (a moment is the product of a force and its distance from the center). In the cases of environmental and passing-vessel excitations, gravity and mooring restraints try to restore the stability of the vessel. The roles of these restoring elements are like this:
- the mass (vessel displacement and added mass) is mostly active in the vertical motions of *heave*, *roll* and *pitch* – this means that the larger the vessel, the larger is its resistance to such motions;
- the mooring line stiffness is mostly active in the horizontal motions of *surge*, *sway* and *yaw*, and marginally in *roll* motion;
- the fender is mostly active in *sway*, and also marginally in *yaw*.
. . . A few words on the passing vessel effect. A vessel speeding past a moored vessel causes surge and sway loads and a yaw moment on the latter. Their magnitudes depend on the speed of the passing vessel, the distance between the two, and the underkeel clearance of the moored vessel. As a simple explanation involving ships in a parallel setting: a moored vessel starts to feel the effect when the passing vessel appears at about twice the length of the former, and the effect (a push-pull of changing magnitude and phase) continues until the passing vessel clears that distance. Analysis shows that the sway pull-out is highest when the passing vessel is at the midship of the moored vessel – but the surge and the yaw are lowest at this position. . . . Well, so far so good. On some aspects of mooring now. Mooring or station keeping comprises two basic types – fleet and fixed moorings. The former refers to systems that primarily use tension members such as ropes and wires, and is mostly applied in designated port offshore anchorage areas. Ports have designated outer single-point anchorage areas where ships can wait for the availability of a port berth, and/or for loading from a feeder vessel and unloading to a lighterage vessel. The area is also used to remain on anchor or on engine-power during a storm. Ships can cast anchors in those areas or moor to an anchorage buoy. For a single point mooring on anchor, a large mooring circle is needed to prevent the anchored ship from colliding with neighbouring vessels. Assuming negligible drifting of the anchor, the radius of this influence-circle depends on water depth, anchor-chain catenary and the ship’s overall length. Anchoring to a moored buoy by a hawser reduces the radius of the influence-circle of the moored ship.
Buoy facilities are usually placed offshore for the mooring of tankers, and such buoys are equipped with multiple cables and hoses to cater to the logistical needs of the vessel as well as for the loading and unloading of petroleum. A vessel moored at a single point is free to swing or weather-vane, following the prevailing weather and current to align itself bow-on. The weather-vaning is advantageous because it minimizes the vessel area exposed to wind, wave and current loads. Fixed mooring refers to a system that uses both tension members (ropes and wires) and compression members (energy-absorbing fenders). A different type of fixed mooring, mostly implemented in the rather calm environmental settings of current and wave found in marinas, anchors the floats via collars to vertical mono-piles. Only a single degree of freedom is provided in this system – which means that the floats move rather freely vertically up and down with changing water levels. Typical fixed moorings include tying the ship at piers (port structures extending into the water from the shore), wharves (port structures on the shore), and dolphins (isolated fixed structures in water) together with loading platforms. The latter are mostly placed in deepwater, with the alignment configured such that moored ships will largely be able to avoid beam seas, currents and wind. The tying is implemented with wires and ropes – some led from the ship winches through fairleads to tying facilities like bollards, bitts or cleats on the berthing structures. Wires and ropes are specified in terms of diameter, material, type of weaving and the minimum breaking load (MBL). The safe working load (SWL) is usually taken as a fraction of MBL – some 0.5 MBL or lower. The mooring lines are spread out (symmetrically about the midship) at certain horizontal and vertical angles.
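Why the line angles matter can be shown with simple statics: only the component of line tension along the restrained direction holds the ship. A minimal sketch, using the SWL = 0.5·MBL rule of thumb quoted above; the angle values are hypothetical, for illustration only.

```python
import math

def effective_restraint(mbl, swl_fraction=0.5, vert_deg=25.0, horiz_deg=10.0):
    """Horizontal restraint a single mooring line contributes along the
    direction it serves: the safe working load (a fraction of MBL, per
    the rule of thumb in the text) projected through the line's vertical
    and horizontal angles. Flat, well-aligned lines are most efficient.
    Angle defaults are illustrative assumptions."""
    swl = swl_fraction * mbl
    return (swl * math.cos(math.radians(vert_deg))
                * math.cos(math.radians(horiz_deg)))
```

For a line with MBL of 1000 kN at these assumed angles, only about 446 kN of the 500 kN working load acts in the restrained direction; steeper lines lose even more.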
Typically, the spring lines (closest to midship, mostly resisting the longitudinal motions) spread out at an angle of no more than 10 degrees from the x-axis; the breasting lines (between the spring and bow/stern lines, mostly resisting the lateral motions) spread out at an angle of no more than 15 degrees from the y-axis; and the bow and stern lines are usually laid out at 45 degrees. The maximum vertical line angles are kept within 25 degrees from the horizontal. The key considerations in laying out mooring lines are to keep the spring lines as parallel as possible, and the breast lines as perpendicular as possible, to the ship’s long axis. When large ships are moored with wires, a synthetic tail is attached to the wires to provide enough elasticity for the vessel motions. . . . This piece ended up longer than I anticipated. Let me finish it by quoting Nikola Tesla (1856 – 1943) – the famous inventor, engineer and physicist: If you want to know the secrets of the universe, think in terms of energy, frequency and vibration. . . . . . - by Dr. Dilip K. Barua, 15 September 2017 Most of the materials in this piece are based on my 2008 ISOPE (International Society of Offshore and Polar Engineers) paper: Wave Loads on Piles – Spectral Versus Monochromatic Approach. This paper discusses, for both monochromatic and spectral waves, how the Morison forces (Morison and others, 1950) compare for a surface-piercing round vertical pile in some cases of waves with Ursell Numbers (Fritz Joseph Ursell, 1923 – 2012), U, in the order of 5.0. I have included an image (courtesy ISOPE) from this paper showing the inertial and drag force RAOs as a function of frequency. The RAO or Response Amplitude Operator represents the maximum force for a unit wave height. . . . Standing in the nearshore water of unbroken waves, one experiences a shoreward push when a wave crest passes and a seaward pull when a wave trough passes – and if the waves happen to be large, he or she may be dislodged from the foothold.
The immediate instinct is to recognize the power of a wave in exerting forces on members standing in its way – resisting its motion. How to estimate these forces? Do structures of all different sizes experience the same type of forces? The answer to the first question depends on how well one answers the second – how the structure sizes up against the wave – the wave length (L) to be exact. It turns out that the nature of wave forces on a structure can be distinguished based on the value of a parameter known as the diffraction parameter – the ratio of the structure dimension (D) perpendicular to the direction of wave advance to the local wave length L. When D is less than about 1/5th of L, the structure can be treated as slender, and the wave forces can be determined by the Morison equation. . . . In this piece let us attempt to see how the Morison forces work – how the forces apply in considerations of both monochromatic and spectral waves. I will also touch upon the nonlinear wave forces. Slender structures exist in many port and offshore installations – as mono piles and pile-supported wharves in ports, as parts of gravity platforms and jacket structures offshore, and as horizontal structural members and pipelines. What are the Morison forces? They are the forces caused by the wave water particle kinematics – the velocity and acceleration. The two kinematics, causing the in-line drag and inertial horizontal forces, are hyperbolically distributed over the height of a vertical standing structure – decreasing from the surface to the bottom. For a horizontal pipeline, the loads include both the in-line horizontal forces and the hydrodynamic vertical lift force. More about these to-and-fro wave forces? The drag force is due to the difference in the velocity heads between the stoss and lee sides of the structure; the inertial force is due to the structure’s resistance to the water particle acceleration.
The hydrodynamic lift force is due to the difference in flow velocities between the top and bottom of a horizontal structure. I will attempt to talk more about it at some other time. Do the slender members change the forcing wave character? Well, while the structures provide resistance by taking the forces upon themselves, they are not able to change the character of the wave – because they are too small to do so. From the perspective of structural configuration, when a vertical member is anchored to the ground but free at the top, it behaves like a cantilever beam subjected to a hyperbolically distributed oscillating horizontal load. When rigid at both ends, the member acts like a fixed beam. A horizontal pipeline supported by ballasts or other rigidities at certain intervals also acts like a fixed beam, with equally distributed horizontal drag and inertial forces and vertical hydrodynamic lift forces. . . . Before entering into the complications of spectral and nonlinear waves, let us first attempt to clarify our understanding of how linear wave forces work. We have seen in the Linear Waves piece on the NATURE page that the wave water particle orbital velocity is proportional to the wave height H, but inversely proportional to the wave period T. The water particle acceleration is similarly proportional to H, but inversely proportional to T^2. (For symmetric or linear waves, the orbital velocity and acceleration are out of phase by 90 degrees.) The nature of the proportionality immediately tells us that waves of low steepness (H/L) have lower orbital velocities and accelerations – and are therefore able to cause smaller forces than waves of high steepness. In light of the Bernoulli Theorem (Daniel Bernoulli, 1700 – 1782), dealing with the dynamic pressure and velocity head, the drag force is proportional to the velocity squared.
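The in-line force described above can be sketched in the standard per-unit-length form of the Morison equation, assuming the coefficient values discussed later in the text (Cd and Cm vary with flow conditions, so the defaults here are typical assumptions, not universal values).

```python
import math

RHO = 1025.0  # seawater density, kg/m^3

def morison_inline(u, dudt, D, Cd=1.2, Cm=2.0):
    """In-line Morison force per unit length (N/m) of a slender vertical
    cylinder (valid when D < ~L/5): a drag term on the velocity u
    (u*|u| keeps the to-and-fro sign) plus an inertia term on the
    acceleration dudt. Cd and Cm are the empirical coefficients
    discussed in the text; the defaults are typical, not universal."""
    drag = 0.5 * RHO * Cd * D * u * abs(u)            # velocity-squared drag
    inertia = RHO * Cm * (math.pi * D ** 2 / 4.0) * dudt  # acceleration inertia
    return drag + inertia
```

The signed u·|u| term makes the drag reverse with the orbital flow, reproducing the shoreward push under the crest and seaward pull under the trough.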
Both the drag and inertial forces must be multiplied by coefficients to account for the structural shape and for the viscosity of the water motion at and around the object. Many investigators have devoted their time to finding the appropriate values of the drag and inertia coefficients. A book authored by T. Sarpkaya and M. Isaacson, published in 1981, summarizes many different aspects of these coefficients. Among other things, the coefficients depend on the value of the Reynolds Number (Osborne Reynolds, 1842 – 1912) – the ratio of the product of orbital velocity and structure dimension to the kinematic viscosity. The dependence of the forces on the Reynolds Number suggests that a thin viscous sublayer develops around the structure – and for this reason the Morison forces are also termed viscous forces. The higher the value of the Reynolds Number, the lower are the values of the coefficients. The highest drag and inertia coefficients are in the range of 1.2 and 2.5, respectively, but drag coefficients as high as 2.0 have been suggested for tsunami forces. . . . How do the drag and inertial forces compare to each other? Two different dimensionless parameters answer the question. The first is known as the Keulegan-Carpenter (G.H. Keulegan and L.H. Carpenter, 1958) Number KC; it is directly proportional to the product of the wave orbital velocity and period, and inversely proportional to the structure dimension. It turns out that when KC > 25 drag force dominates, and when KC < 5 inertia force dominates. The other parameter, known as the Iversen Modulus (H.W. Iversen and R. Balent, 1951) IM, is the ratio of the maximums of the inertia and drag forces. It can be shown that these two parameters are related to each other in terms of the force coefficients.
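The KC regimes quoted above translate directly into a small classifier; the thresholds are the ones given in the text.

```python
def keulegan_carpenter(u_max, T, D):
    """KC number: the ratio of the water-particle excursion scale
    (u_max * T) to the structure dimension D."""
    return u_max * T / D

def force_regime(kc):
    """Regime thresholds as given in the text: inertia dominates for
    KC < 5, drag for KC > 25, with both significant in between."""
    if kc < 5:
        return "inertia-dominated"
    if kc > 25:
        return "drag-dominated"
    return "mixed"

# Example: a short wave on a stout pile gives a small excursion, hence
# an inertia-dominated force; tide- or tsunami-like flow gives huge KC.
regime = force_regime(keulegan_carpenter(u_max=1.0, T=4.0, D=1.0))
```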
While the horizontal Morison force properly results from the phase-resolved addition of the drag and inertial forces, which are 90 degrees out of phase, conventional engineering practice ignores this fact and instead adds the maximums of the two together. This practice adds a hidden factor of safety (HFS) to the design forces. For example, a 2-meter, 12-second wave acting on a 1-meter vertical structure standing in 20 meters of depth (U = 6.4) would afford an HFS of 1.45. However, the HFS varies considerably with the changing values of IM – the highest occurring at IM = 1.0, decreasing to unity at very high and low values of IM. How does wave nonlinearity affect the Morison forces? We have seen in the Nonlinear Waves piece on the NATURE page that the phase difference between the velocity and acceleration shifts away from 90 degrees – with increasing crest water particle velocity and acceleration. For the sake of simplicity, let us focus on a 1-meter high, 8-second wave propagating from the region of symmetry at 10-meter water depth (U = 5.0) to the region of asymmetry at 5-meter water depth (U = 22.6). By defining and developing a relationship of velocity and acceleration with U, it can be shown that the maximum linear and nonlinear forces are nearly equal to each other at U = 5.0. But as the wave enters the region of U = 22.6, the nonlinear drag force becomes 36% higher than the linear drag force, and the nonlinear inertia force becomes 8% higher than the linear one. With waves becoming more nonlinear in shallower water, the percentages increase manifold over the ones estimated by the linear method. While the discussed method provides some insights into the behaviors of the nonlinear Morison forces, the USACE (US Army Corps of Engineers) CEM (Coastal Engineering Manual) and SPM (Shore Protection Manual) provide graphical methods to help estimate the nonlinear forces. . . . Now let us turn our attention to the most difficult part of the problem.
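The hidden-factor-of-safety idea can be demonstrated numerically for a linear wave: compare the conventional sum of the two maxima against the true maximum of the combined time history, with drag varying as cos(θ)·|cos(θ)| and inertia, 90 degrees out of phase, as −sin(θ). This is a sketch of the concept only; the 1.45 figure quoted above comes from the specific wave in the paper, not from this simplified phase model.

```python
import math

def hidden_factor_of_safety(fd_max, fi_max, n=3600):
    """Ratio of the conventional design force (drag and inertia maxima
    simply added) to the true maximum of the phase-resolved combination,
    sampled over one wave cycle. fd_max, fi_max: the individual force
    maxima; their ratio plays the role of the Iversen Modulus IM."""
    true_max = max(
        abs(fd_max * math.cos(th) * abs(math.cos(th)) - fi_max * math.sin(th))
        for th in (2.0 * math.pi * k / n for k in range(n)))
    return (fd_max + fi_max) / true_max
```

Consistent with the text, this simplified model gives its largest HFS when the two maxima are equal (IM = 1), and the HFS falls toward unity when either force strongly dominates.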
What happens to the Morison forces in spectral waves? How do they compare with the monochromatic forces? To answer these questions, I will depend on my ISOPE paper. The images presented from this paper show the inertial and drag force RAOs over the 20-meter water depth, at 1-meter intervals from the surface to the bottom – for a 2-meter high, 12-second wave acting on a 1-meter diameter round surface-piercing vertical pile. The forcing spectral wave is characterized by the JONSWAP (Joint North Sea Wave Project) spectrum (see Wave Hindcasting). For this case, the RAOs are highest at frequencies about 3.5 times higher than the peak frequency (fp) of 0.08 Hertz (or 12 seconds). This is interesting because the finding is contrary to the general intuition that wave forces are highest at the frequency of the peak energy – the period effect on wave kinematics! As the frequency decreases, the inertial force RAO diminishes, tending to zero. The drag force RAO, on the other hand, tends to reach a constant magnitude as the frequency decreases. This finding confirms that for the low-frequency motions of tide and tsunami, the dominating force is the drag force (which is also true for cases when KC > 25.0). How do the spectral wave forces compare with the monochromatic wave forces? It turns out that for the case considered, the monochromatic method underestimates the wave forces by about 9%. A low difference of this order of magnitude is good news, because one can avoid the rigors of the spectral method to overcome such a small difference – a difference of this magnitude is within the range of typical uncertainties of many parameters. . . . . . - by Dr. Dilip K. Barua, 17 November 2016 We have talked about the natural waves in several pieces – Ocean Waves, Linear Waves and Nonlinear Waves on the NATURE page, and Spectral Waves on the SCIENCE & TECHNOLOGY page.
In this piece let us turn our attention to the most dynamic and perhaps the least understood region of wave processes – the surf zone – the zone where waves dump their energies, giving birth to something else (see the Transformation of Waves piece). What is the surf zone? The surf zone is the shoreline region from the seaward limit of initial wave breaking to the shoreward limit of the still water level. The extent of this zone shifts continuously – shoreward during high tide and seaward during low tide – shoreward during low waves and seaward during high waves. The wave breaking leading to the transformation – from the near-oscillatory wave motion to the near-translatory wave bores – is the fundamental process in this zone. Note that by the time a deep-water spectral wave arrives at the seaward limit of the surf zone, its parent spectrum has already evolved into something different, and the individual waves have become mostly asymmetric or nonlinear. In the process of breaking, a wave dumps its energy giving birth to several responses – from the reformation of broken waves, wave setup and runup, and infragravity waves – to the longshore, cross-shore and rip currents – to the sediment transports and morphological changes of alluvial shores. The occurrence, non-occurrence or extent of these responses depends on many factors. It is impossible to treat all these processes in this short piece. Therefore I intend to focus on some fundamentals of the surf zone processes – the processes of wave breaking and energy dissipation. . . . How are the surf zone processes treated in mathematical terms? Two different methods are usually applied. The first is based on an approximation of the Navier-Stokes (French engineer Claude-Louis Navier, 1785 – 1836; and British mathematician George Gabriel Stokes, 1819 – 1903) equation.
In one application, it is based on the assumption that the convective acceleration is balanced by the in-body pressure gradient force, wave forcing and lateral mixing, and by surface wind forcing and bottom frictional dissipation. The second approach is based on balancing two lumped terms – the incoming wave energy and the dissipated energy. We have seen in the Linear Waves piece on the NATURE page that wave energy density is proportional to the wave height squared. In a similar fashion, the dissipated wave energy is proportional to the breaking wave height squared, multiplied by some coefficients. Both approaches are highly dependent on empirical descriptions of these coefficients – making the mathematical treatment of the problem rather weak. . . . Let us focus on the energy approach – how the breaking energy dissipation occurs in the surf zone. Many investigators were involved in formulating this phenomenon. For individual monochromatic waves, the two most well-known formulations are the one proposed by M.J.F. Stive in 1984 and the one proposed by W.R. Dally, R.G. Dean and R.A. Dalrymple in 1985. It was the energy dissipation formulation by J.A. Battjes and J.P.F.M. Janssen in 1978 that addressed the energy dissipation processes of spectral waves. This formulation required an iterative process, and was therefore cumbersome to apply. With modern computing power, however, that hurdle does not exist any more. In addition to the aforementioned investigators, there are many others who have modified and refined the formulations and understandings of the surf zone processes. In this piece let us attempt to see how the energy dissipation works for spectral waves. Among the coefficients influencing the energy dissipation is a factor that defines what fraction of the spectral wave group breaks. This factor is very important, and let me try to illustrate how it works – by solving the term iteratively. . . .
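The iteration referred to above can be sketched directly. In the Battjes-Janssen (1978) model, the fraction of breaking waves Qb satisfies the implicit relation (1 − Qb)/ln(Qb) = −(Hrms/Hmax)², which lends itself to a simple fixed-point solution; this is a minimal sketch of that step, not the full dissipation model.

```python
import math

def breaking_fraction(h_rms, h_max, tol=1e-10, max_iter=1000):
    """Fraction Qb of breaking waves in the Battjes-Janssen (1978)
    spectral dissipation model, from the implicit relation
        (1 - Qb) / ln(Qb) = -(Hrms/Hmax)^2,
    solved by the fixed-point iteration Qb = exp((Qb - 1) / B^2),
    where B = Hrms/Hmax. h_rms: root-mean-square wave height;
    h_max: local depth-limited maximum wave height."""
    b2 = (h_rms / h_max) ** 2
    if b2 >= 1.0:
        return 1.0            # saturated surf zone: all waves breaking
    qb = 0.5                  # neutral starting guess
    for _ in range(max_iter):
        qb_new = math.exp((qb - 1.0) / b2)
        if abs(qb_new - qb) < tol:
            break
        qb = qb_new
    return qb_new
```

In deep water (Hrms well below the depth limit) almost no waves break; as Hrms approaches Hmax in shoaling water, Qb climbs toward one, which is exactly the progressive energy loss the next paragraphs describe.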
Before doing that, let me highlight some other understandings of the surf zone processes. Early investigators of the surf zone processes noticed some important wave breaking behaviors – that all waves do not break in the same fashion, nor on all different beach types. Their findings led them to define an important dimensionless parameter – the Surf Similarity Number (SSN) – also known as the Iribarren Number (after the Spanish engineer Ramon Cavanillas Iribarren, 1900 – 1967; C.R. Iribarren and C. Nogales, 1949) – or simply the Wave Breaking Number. This number combines beach slope and wave steepness – it is directly proportional to the beach slope and inversely proportional to the square root of the wave steepness (steepness being the ratio of wave height to wave length – long-period swells are less steep than short-period seas). Either of these two parameters could define a breaker type. To identify the different breaker types it is necessary to define some threshold SSN values. Among the first to define the threshold values were C.J. Galvin (1968) and J.A. Battjes (1974), but more light was shed by many other investigators later. On the lower side, when the SSN is less than 0.5, the type is termed a Spilling Breaker – it typically occurs on gently sloping shores during the breaking of high-steepness waves – and is characterized by breaking waves cascading down shoreward, creating a foamy water surface. On the upper side, when the SSN is higher than 3.3, the type changes to Surging and Collapsing Breakers. In a surging breaker, waves remain poorly broken while surging up the shore. In a collapsing breaker, the shoreward water surface collapses on the unstable and breaking wave crest. Both of these breakers typically occur on steep shores during periods of incoming low-steepness waves.
When the SSN ranges between 0.5 and 3.3, the type becomes a Plunging Breaker – it typically occurs on intermediate shore types and wave steepness – and is characterized by the curling shoreward wave crest plunging and splashing onto the wave base. This type of breaker causes high turbulence, and sediment resuspension and transport on alluvial shores. An example of sediment transport processes and associated uncertainties in the surf zone is in the Longshore Sand Transport piece. . . . Perhaps it is helpful to think for a while about what happens in the surf zone – as one watches the incoming waves – high and low – breaking at different depths while propagating on to the shore. What prompts wave breaking? The shallow water wave-to-wave interaction – the Triad – lets the wave spectrum evolve into a narrow band, shifting the peak energy to high frequencies. The concentration of wave energies lets a wave-form become highly nonlinear and unsustainable as the water particle velocity exceeds the celerity (square root of the product of acceleration due to gravity and depth), and right before breaking it takes the shape of a solitary wave (a wave that does not have a definable trough). I have tried to throw some light on this breaking process in my short article published in the 2008 ASCE Journal of Waterway, Port, Coastal and Ocean Engineering {Discussion of ‘Maximum Fluid Forces in the Tsunami Runup Zone’}. Further, we have discussed in the Transformation of Waves piece on this page that a wave cannot sustain itself when it reaches the steepness threshold value of 1/7 and higher. A criterion proposed by Miche [Le pouvoir reflechissant des ouvrages maritimes exposes a l’action de la houle, 1951] captures the wave breaking thresholds not only due to the limiting steepness, but also due to the limiting water depths in shoaling water. In 1891, J.
McCowan showed that a wave breaks when its height reaches 4/5th of the water depth on a flat bottom. Now that we know the wave breaking types and initiation, let us try to understand how the energies brought in by the waves are dissipated. To illustrate the process I have included an image showing the percentage of fractional energy dissipation, as two spectral waves – a 2-meter 6-second and a 2-meter 12-second – propagate on to the shore. As demonstrated in the image, spectral waves begin to lose energy long before the final breaking episode happens. This means that the transformation process lets the maxima of the propagating spectrum break as they reach the breaking threshold. In addition, as expected, the shorter period waves (see the 6-second red line) lose more energy on the way to shallow water than the longer period ones (see the 12-second blue line). By the final breaking, all the energies are dissipated, giving birth to other processes (water level change, nearshore currents and sediment transport). On a 1 to 10 nearshore slope, the SSNs for the two cases are 0.5 and 1.1, respectively – indicating a Plunging Breaker type – but at two different scales. On most typical sandy shores, the long-period waves are more likely to end up as a Plunging Breaker than the short-period ones. At the final stage, the breaking wave heights are 76% and 79% of water depths for the two cases. . . . Does anyone see another very important conclusion from this exercise? Well, the conclusion is that, for any given depth and wave height, the long-period waves bring in more energy to the shore than the shorter ones – the period effect. Let us attempt to see more of it at some other time. Some more insights on the surf zone waves are provided by Goda (Yoshimi Goda, 1935 – 2012), describing a graphical method to help determine wave height evolution in the surf zone.
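The two SSN values quoted above can be reproduced from the deep-water wavelength L0 = gT²/2π. A minimal sketch (Python, for illustration; the 0.5 and 3.3 thresholds are those discussed above):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def surf_similarity(slope, wave_height, period):
    """SSN = slope / sqrt(H / L0), with L0 the deep-water wavelength."""
    l0 = G * period ** 2 / (2.0 * math.pi)   # deep-water wavelength, m
    steepness = wave_height / l0
    return slope / math.sqrt(steepness)

def breaker_type(ssn):
    """Classify by the threshold SSN values discussed above."""
    if ssn < 0.5:
        return "spilling"
    if ssn <= 3.3:
        return "plunging"
    return "surging/collapsing"

# The two 2-meter spectral waves from the text, on a 1:10 slope:
for period in (6.0, 12.0):
    ssn = surf_similarity(0.1, 2.0, period)
    print(f"T = {period:>4} s: SSN = {ssn:.1f} -> {breaker_type(ssn)}")
```

For the 6-second and 12-second waves this returns SSN of about 0.5 and 1.1 – matching the two plunging-breaker cases above.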
His graphs show, for different wave steepness and nearshore slopes, how the maximum and the significant wave heights evolve on the way from deep water to the final breaking – and then to the reformed waves after final breaking. . . . . .
- by Dr. Dilip K. Barua, 10 November 2016
In this piece let us talk about another of Ocean’s Fury – the Storm Surge. A storm surge is the combined effect of wind setup, wave setup and inverse barometric rise of water level (the phenomenon of reciprocal rise in water level in response to atmospheric pressure drop). Also important in the surge effects is tide, because the devastating disasters occur mostly when the peak surge rides on high tide (the superimposition of tide and storm surge is known as storm tide). Wind setup, a minor contributor to water level rise, occurs in most coastal water bodies during periods of Strong Breeze (22 – 27 knot; 1 knot = 1.15 miles/h; 1.85 km/h; 0.51 m/s) and Gale Force (28 – 64 knot) winds (see the Beaufort Wind Scale) – during winter storms and landward monsoons – and is measurable when the predicted tide is separated from the measured water levels. Such setups and seiche (standing wave-type basin oscillation responding to different forcing and disturbances) are visible in the water level records of many British Columbia tide gauges. Storms are accompanied by high wave activity; consequently wave setups are caused by breaking waves. Wave setup is the super elevation of the mean water level – this elevation rises from the low set-down at the wave breaker line. Let us attempt to understand all these different aspects of a storm surge – but focusing only on Hurricane (wind speed > 64 knot) scale storms. . . . I have touched upon the phenomenon of storm surge in the piece on the NATURE page telling about my encounter with the 1985 cyclonic storm surge on the Bangladesh coast.
Later my responsibilities led me to study and model some storm surges – surges caused by Hurricane ISABELLE (CAT-2, September 18, 2003), Hurricanes FRANCES (CAT-2, September 5, 2004) and JEANNE (CAT-3, September 26, 2004), and Hurricane IKE (CAT-2, September 12, 2008) on the U.S. coasts. Some materials from my U.S. experiences have been presented and published (Littoral Shoreline Change in the Presence of Hardbottom – Approaches, Constraints and Integrated Modeling, FSBPA 2009; Sand Placement Design on a Sand Starved Clay Shore at Sargent Beach, Texas, ASBPA 2010 [presented by one of my young colleagues at Coastal Tech]; and Integrated Modeling and Sedimentation Management: the Case of Salt Ponds Inlet and Harbor in Virginia, Proceedings of the Ports 2013 Conference, ASCE). . . . In response to managing the storm effects, many storm-prone coastal countries have customized modeling and study tools to forecast and assess storm hazard aftermaths. Examples are the FEMA numerical modeling tool SLOSH (Sea, Lake and Overland Surges from Hurricanes) and the GIS based hazard effects analysis tool HAZUS (Hazards U.S.). SLOSH is a coupled atmospheric-hydrodynamic model developed by the National Hurricane Center (NHC) at NOAA. The model does not include storm waves and wave effects, or rain flooding. NHC manages a Hurricane database, HURDAT, to facilitate studies by individuals and organizations. WMO-1076 is an excellent guide on storm surge forecasting. . . . What are the characteristics of such a Natural hazard – of the storm surge generating Hurricanes? Hurricanes (in the Americas), Cyclones (in South, Southeast Asia and Australia) and Typhoons (in East Asia) are tropical low pressure systems fed by spiraling winds and clouds converging toward the low pressure.
Perhaps an outline of some of the key characteristics will suffice for this piece.
- Storm systems develop in the Equatorial warm water within a belt of 5 to 20 degrees North and South Latitude in the Atlantic, Indian and Pacific Oceans, where the water temperature is no less than 27 degrees Celsius.
- This warm ocean water mostly occurs during and around the summer season. The following is the general seasonal distribution of storm formation and strikes on different continental coasts: Hurricanes on the Central-North American East coast (June – November); Hurricanes on the Central-North American West coast (May – November); Cyclones in East Africa (October – May); Cyclones in the Indian Subcontinent (April – December); Typhoons in East and Southeast Asia (April – January); and Cyclones in East and West Australia (October – May).
- About 90% of the baby storms do not survive. The remaining 10% strengthen from Tropical Depressions to Tropical Storms to Hurricane scale if conditions are right.
- A brief statistic of the global distribution of Hurricane scale storms: North Atlantic (12.1%), Northeast Pacific (20.7%), Northwest Pacific (35.5%), North Indian (4.6%), Southwest Indian (10.4%), Australian Southeast Indian (7.0%), and Australian Southwest Pacific (9.7%).
- As a storm starts to move away it is subjected to the Earth’s rotation – the Coriolis Effect – giving it a counterclockwise spin in the Northern Hemisphere and a clockwise spin in the Southern Hemisphere. The significance of this type of rotation is that the storms in the Northern Hemisphere cause high storm surge on the right side of the propagating storm. In the Southern Hemisphere it occurs on the left side.
- If one looks at a monstrous storm from above, it appears as a circle with a **huge low pressure hole called the eye** in the center, characterized by calm cloud-free wind. It is circled by the eye-wall of very high speed thunderstorm clouds spinning around. Some typical order-of-magnitude scales: **eye diameter 30 kilometer, eye wall length 15 kilometer, average storm diameter 600 kilometer.**
- The **Saffir-Simpson** CATEGORY of highest sustained wind speed (*sustained wind speed represents a 2 to 10 minute average measured at 10 meter above the surface*) and eye pressure: **CAT-1:** 64 – 82 knot, > 980 millibar; **CAT-2:** 83 – 95 knot, 965 – 979 millibar; **CAT-3:** 96 – 112 knot, 945 – 964 millibar; **CAT-4:** 113 – 136 knot, 920 – 944 millibar; and **CAT-5:** > 136 knot, < 920 millibar. The normal atmospheric pressure is 1000 millibar.
- The typical forward speed of a Hurricane is about 10 to 20 knot. This speed and direction are modulated by other wind systems such as a high pressure system. A Hurricane goes through transformation and evolution processes as it moves, and can make direct landfall or can move hugging a coastline. In either case, it causes high storm surge and wave activity.
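The CATEGORY bands above reduce to a small lookup. A minimal sketch (Python; the function name is mine, and the wind bands are exactly those listed above):

```python
def saffir_simpson(sustained_wind_kt):
    """Map highest sustained wind speed (knots) to a Saffir-Simpson
    category, per the bands listed above. Returns 0 for
    below-hurricane strength."""
    bands = [(64, 1), (83, 2), (96, 3), (113, 4), (137, 5)]
    category = 0
    for lower_kt, cat in bands:
        if sustained_wind_kt >= lower_kt:
            category = cat
    return category

print(saffir_simpson(90))    # prints 2 -- a CAT-2 storm
print(saffir_simpson(140))   # prints 5 -- a CAT-5 storm
```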
**A slow moving storm is very dangerous because it can sustain high surge and wave activity as well as heavy rainfall in landfalling and contiguous locations for a long time.** A storm not only causes havoc in the areas of immediate landfall; its remnants also cause huge damages in far-flung areas.
- A Hurricane starts to die after making landfall or when moving into cold water – when its energy is dissipated or when fresh sources of energy are not available.
. . . Before going further an important puzzle needs to be highlighted. Both the short wave and the long storm surge wave are generated by the dynamic pressure or kinetic energy exerted by the speeding wind; and their magnitudes are proportional to the square of the speed (referring again to Daniel Bernoulli, 1700 – 1782). Why are there two different wave types? What are the processes responsible for their formations? The questions may sound naive, but the answers may reveal some valuable insights. The short wave is the water surface response to transporting the gained energy in progressive wave motions. Like the turbulent wind, these waves are highly irregular and spectral. The storm surge waves, on the other hand, result from the hydrodynamic balance between the wind-induced water motion and the resistance of that motion by the coast. The result is the piling up of water at the coast – a standing long wave. One should not forget, however, that the transformation aspects of a long wave – the processes of funneling, resonance (note that a storm surge is not monochromatic, therefore some frequencies may resonate with the basin natural frequency) and shoaling – also play a role. These processes affect the storm surge height on wide continental shelves and in closed basins, and are discussed in the Tsunami and Tsunami Forces piece on this page. To illustrate the storm surge, I have included an image of the CAT-1 Hurricane SANDY (October 29, 2012) storm surge on the New Jersey coast. . . . To manage this discussion into a short piece let us focus on a CAT-2 Hurricane characterized by a wind speed of 90 knot and an eye pressure of 970 millibar. Let us attempt to estimate some orders of magnitude of the inverse barometric effect, wind setup and wave setup. The inverse barometric effect is often simplistically estimated as: 1 centimeter rise of water level in response to 1 millibar of pressure drop.
For the example storm, the pressure drop at the center of the eye is 30 millibar – resulting in a reciprocal water level rise of 30 centimeter. This rise is rather like a moving dome of water having a typical diameter of some 30 kilometer. As one can imagine, this simple estimate, however small it may be, cannot be added directly to the wind and wave setups because these two effects occur at the eye wall where the wind speed is the highest. . . . How does the wind setup occur? Sustained winds cause a water surface drift current in the direction of the wind. For the example CAT-2 Hurricane, the surface drift current in the absence of an obstruction would be about 1.4 meter per second. For a landfalling Hurricane, when the shoreward surface-layer current is obstructed by the coast, water level rises at the coast to balance and cause a seaward bottom-layer current (the generated bottom current erodes and transports sediments seaward, changing the morphology of the coastal sea). A simple estimate shows that the example CAT-2 Hurricane would cause a wind setup of 2.7 meter for a 50 kilometer wide continental shelf with an average water depth of 10 meter. To give an idea of the wave setup let us consider a maximum significant wave height of 4.0 meter, and a maximum wave period of 14 second (these parameters roughly correspond to those measured during Hurricanes FRANCES and JEANNE near the coast). An estimate shows that the wave setup is about 1.2 meter, about 30% of the wave height. In aggregate the storm surge for the example CAT-2 Hurricane is in the order of some 3 meter – likely less in some areas and more in others. The Hurricanes FRANCES, JEANNE, IKE and ISABELLE registered surge heights of about 2.0 meter in some places. What is the periodic scale of a storm surge? For the slow moving Hurricanes like FRANCES and JEANNE, it was about 4 days (from the time of rising to the time of falling at the same water level), and for Hurricane IKE it was 1.5 days.
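The three component estimates above can be sketched in a few lines. The wind setup follows the usual steady-state balance of wind stress against the water surface slope, η ≈ τF/(ρgd) with τ = ρ_air·Cd·U²; the surface drag coefficient Cd and the 30%-of-breaking-height wave setup factor are my assumptions here, chosen only to reproduce the order-of-magnitude figures quoted above:

```python
RHO_AIR = 1.225      # air density, kg/m^3
RHO_SEA = 1025.0     # sea water density, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2
KNOT = 0.514         # m/s per knot

def inverse_barometric_rise_m(pressure_drop_mb):
    """~1 cm of water level rise per millibar of pressure drop."""
    return 0.01 * pressure_drop_mb

def wind_setup_m(wind_kt, fetch_m, depth_m, cd=2.1e-3):
    """Steady-state wind setup over a shelf of constant depth:
    eta = tau * F / (rho g d), with tau = rho_air * Cd * U^2.
    Cd is an assumed surface drag coefficient."""
    u = wind_kt * KNOT
    tau = RHO_AIR * cd * u * u
    return tau * fetch_m / (RHO_SEA * G * depth_m)

def wave_setup_m(breaking_wave_height_m, factor=0.3):
    """Rule-of-thumb setup of ~30% of the breaking wave height."""
    return factor * breaking_wave_height_m

# The example CAT-2 Hurricane: 90 knot, 970 mb eye, 50 km wide shelf, 10 m deep
ib = inverse_barometric_rise_m(1000 - 970)   # ~0.3 m
ws = wind_setup_m(90.0, 50e3, 10.0)          # ~2.7 m
vs = wave_setup_m(4.0)                       # ~1.2 m
print(round(ib, 2), round(ws, 1), round(vs, 1))
```

As in the text, the components are of the order 0.3, 2.7 and 1.2 meter – keeping in mind that they peak at different places and cannot simply be summed everywhere.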
Note that a periodic scale of this size covers 2 or more tidal cycles – but the damages mostly occur when the surge peak coincides with high tide (even more so when it coincides with spring tide). . . . The complexity of storm surges can best be described by numerical modeling. But it is also possible to estimate the surge more elaborately as a function of distance and time. Apart from damages, structural destruction and dike overtopping and breaches, storm surges greatly change nearshore and beach morphology – providing work for summertime waves to reshuffle and redefine them. What we have discussed so far is the positive storm surge that occurs on the right side of a landfalling Hurricane in the Northern Hemisphere. A negative storm surge, popularly known as the Sea Level Blow Out, also occurs simultaneously on the left side. See more in the Frontal Wave Force Field in Force Fields in a Coastal System. We have discussed the likelihood of enhanced storm activity with Warming Climate on the NATURE page and also on this page. The high storminess together with the accelerated Sea Level Rise is only inviting humans to realize the consequences of our actions and face Nature’s Wrath – perhaps to a degree that modern humans have not witnessed before. When I started this piece, I thought of writing it as a small one. Instead, I ended up spending more time on it, resulting in the usual 4 to 5 pages length. Well, what can one do when materials are overwhelming? . . . . .
- by Dr. Dilip K. Barua, 3 November 2016
I have touched upon some of the extreme episodes in the Nature's Action piece on the NATURE page. Tsunami is one of Nature’s violent wraths that unleashes an immense trail of casualty and destruction on its path. Our memory is still fresh with the vicious havoc of the 2004 and 2011 tsunamis in Indonesia and Japan. Many of us have seen the live coverage of the 2011 Japan tsunami, and I have included a snapshot image (credit: anon) of it.
Perhaps an image of this kind gave rise to the myth of Noah’s Ark in the ancient past. It is impossible to realize the absolute shock and horror unless one is present at a tsunami scene. Let us try to talk about this interesting topic – tsunami characteristics and the loads they exert on structures standing in their path. Tsunami is one of the rarest natural phenomena, occurring with little definitive advance notice. Tsunamis are caused by earthquakes, landslides, volcanic eruptions, and by rapid and large drops in atmospheric pressure. Perhaps talking about the first two will suffice for this piece. The first type, triggered by underwater earthquakes – often called tsunamigenic earthquakes – causes sudden substantial rupture of the Earth’s crust, displacing a huge mass of water. The process gives birth to a series of impulsive waves known as tsunami that radiate out directionally from the source. Aspects of tsunamis generated by underwater volcanic eruptions – in particular in light of the most recent violent eruption of Tonga on 15 January 2022 – remind us how devastating their effects can be. The Tonga eruption is estimated to have measured 6 on the 1 to 8 VEI scale (Volcanic Explosivity Index, C. Newhall and S. Self 1982). The eruption prompted a Pacific-wide tsunami warning – and really impacted Chile and Peru at more than 10,000 km away. On top of that, record books illustrate the largest and most disastrous tsunami generated by the Indonesian Krakatoa eruption on 27 August 1883 – it is said the tsunami was as high as 40 m – and together with the eruption killed some 36,000 people. . . . A first order simplistic estimate of tsunami height (trough to crest) and period (time interval between two successive crests or troughs) relates these two tsunami parameters logarithmically to the magnitude of the earthquake on the Richter scale. For example, a submarine earthquake with a magnitude of 7.5 could generate a 3.6 meter high, 24 minute tsunami at the source.
Note that the 2004 tsunami off the Indonesian coast and the 2011 tsunami off the Japanese coast were caused by 9.3 and 9.0 magnitude earthquakes, respectively. The second important cause is the tsunamigenic submarine or terrestrial rapid landslide. Such landslides, representing a rigid body motion along a slope-failure surface, are often triggered by earthquakes. In this case, a first order simplistic estimate relates tsunami height directly to the slide volume and sliding horizontal angle, and reciprocally to the water depth of incidence. One example of such a tsunami was the 1975 tsunami that occurred at the head of the Kitimat Arm of the Douglas Channel fjord system in British Columbia. A recent example is the Palu Indonesia Tsunami. A tsunami with a period in the order of 10s of minutes is classified as a long wave or a long-legged wave, and as pointed out in the blog on the NATURE page, such waves occur when the wave length is longer than 20 times the local water depth. Both the widths and lengths of crests and troughs of such long-period waves are measurable in scales of kilometers. Like all other waves, they are subjected to the transformation processes as soon as they are born, traveling very fast in deep water. The Ocean Waves and Transformation of Waves pieces on this page highlight some of the wave transformation characteristics. Let me briefly describe some processes specific to long waves as they enter shallow water. . . . At least 3 processes of tsunami transformation are important – these are the processes of shoaling, funneling and resonance. The phenomena of shoaling and funneling can best be understood by applying the energy conservation principle, often known as Green’s Law. This simple principle, assuming no losses of energy by friction, etc., shows that for a gradually shoaling continental shelf, the ratio of height increase is proportional to the reciprocal of the ratio of depth decrease raised to the 1/4th power.
For a channel gradually decreasing in width, the funneling effect is given by the ratio of height increase that is proportional to the reciprocal of the ratio of width decrease raised to the 1/2 power. As a simple example of shoaling, a 3.6 meter high wave will amplify to 6.4 meter as it transforms from 100 to 10 meter water depth. The phenomenon of resonance is quite interesting and intriguing because of its analogy to the force-response dynamics of an oscillating system. Resonance should not be confused with funneling – funneling occurs in the process of balancing the energy, while resonance is the frequency response of the system and occurs due to the reflection of and interaction with the incident wave. Let me try to explain it more based on my paper – Modeling Tsunami and Resonance Response of Alberni Inlet, British Columbia {30th International Conference on Coastal Engineering, World Scientific, 2006}. This paper is one of the many papers listed by Prof. Robert L. Wiegel at the University of California, Berkeley as Tsunami Information Sources and published in 2009 in the Science of Tsunami Hazards – the International Journal of Tsunami Society. To get into the core concept of it, one needs to understand the behavior of an oscillating system. Such a system is characterized by a natural frequency at which it resonates to the exciting force – which means that the incident and reflected waves are virtually interlocked with each other, with very high anti-nodal amplification of the incident wave amplitude. In reality, however, most natural systems do not respond to such an extent because of frictional damping, etc. In addition, experiments with resonant behaviors show that a system also responds by amplification of the exciting wave both at sub- and super-resonant frequencies. This behavior is very important, and let me try to clarify how it explains the tsunami response of the Alberni Inlet. . . .
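First, though, the Green's Law numbers quoted above can be reproduced in a few lines (Python, for illustration):

```python
def greens_law(h1, depth1, depth2, width1=None, width2=None):
    """Green's Law: lossless long-wave height change with
    shoaling depth (1/4 power) and narrowing width (1/2 power)."""
    h2 = h1 * (depth1 / depth2) ** 0.25       # shoaling
    if width1 is not None and width2 is not None:
        h2 *= (width1 / width2) ** 0.5        # funneling
    return h2

# The shoaling example from the text: 3.6 m wave, 100 m to 10 m depth
print(round(greens_law(3.6, 100.0, 10.0), 1))   # prints 6.4

# A channel narrowing to half its width amplifies the wave further
print(round(greens_law(3.6, 100.0, 10.0, width1=2.0, width2=1.0), 1))
```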
The 1964 Alaska tsunami registered about 1 meter high with a period of 90 minutes at the entrance of Alberni Inlet at Bamfield, and amplified 3 times to cause huge damages at the head of the inlet at Port Alberni. The Alberni Inlet is a 65 km long deep fjord that shows virtually no phase lag or amplification in tidal motion. Such a system is very vulnerable because its natural frequency lies within the close range of usual tsunami frequencies. Contrary to the conclusions of previous investigators, my hydrodynamic modeling investigation with Mike21 (courtesy Danish Hydraulic Institute) showed that the 3-times amplification occurred at a sub-resonant frequency – and had there been an incident tsunami close to the resonant frequency, the amplification would have been some 5 times. Like all waves, a small tsunami in deep water shoals to monstrous waves as it propagates into shallow water. After breaking, Tsunami Run-ups flood coastal lands with enormous inbound and outbound speeds causing havoc and destruction. The arrival of the Tsunami crest is preceded by the huge draw down or Sea Level Suck Out associated with the Tsunami trough. This phenomenon sucks things out from the shore into the sea – exposing shoreline features – leaving many aquatic lives stranded in air. It catches offshore boats off-guard – and tragedies happen when people rush out to catch the stranded fish. See more in the Frontal Wave Force Field in Force Fields in a Coastal System. Well, so far so good. Now let us focus our attention on the most important aspect of tsunami effect – the runup (runup is the vertical height of tsunami propagation above mean water level) and loads on structures. Part of this discussion will be based on my short article published in the 2008 ASCE Journal of Waterway, Port, Coastal and Ocean Engineering {Discussion of ‘Maximum Fluid Forces in the Tsunami Runup Zone’}. . . .
Most tsunamis at their birth belong to the nonlinear wave category – and the degree of nonlinearity increases as they become taller and travel into shallow water. At a certain time, the tsunami wave becomes unsustainable as the water particle velocity exceeds the celerity (square root of the product of acceleration due to gravity and depth), and taking the shape of a solitary wave (a wave that does not have a trough) it breaks, generating a very forceful bore that runs up the land. This process turns an oscillatory wave into a translatory one like that of a dam break flood wave. However, the difference between the flood wave and the tsunami translatory wave is that the tsunami is not a single wave, but rather a series of waves that come one after another with complicated hydraulic interactions of run ups and run downs. What is the maximum velocity of the tsunami bore? Investigations show that the maximum bore velocity is about 1.4 times the celerity (up to 2 times at the leading edge; celerity becomes defined in terms of tsunami height as there is no trough), or even higher when the propagating bores become constricted by topography and structures. As an example, a 1 meter high tsunami is likely to generate a 6.3 meter per second velocity at the leading edge, followed by a propagating velocity of 4.4 meter per second. In Froude (William Froude, 1810 – 1879) terms, a wave propagation speed exceeding the celerity is called supercritical flow. Such flows can travel upstream, cross channels, roads and embankments, unleashing the huge energy they possess. A water velocity of this kind has enormous power to destroy structural members and uproot them by scouring. . . . Let us now turn our attention to the forces that tsunamis exert on structures standing in their way.
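Before doing so, the bore-velocity figures just quoted can be reproduced directly (Python sketch; the 1.4 and 2.0 multipliers are the ones cited above):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def bore_velocities(tsunami_height_m):
    """Celerity defined on the tsunami height (no trough), with the
    ~1.4x propagating and ~2x leading-edge multipliers cited above.
    Returns (leading edge, propagating) velocities in m/s."""
    celerity = math.sqrt(G * tsunami_height_m)
    return 2.0 * celerity, 1.4 * celerity

leading, propagating = bore_velocities(1.0)
print(f"leading edge ~{leading:.1f} m/s, propagating ~{propagating:.1f} m/s")
```

For the 1-meter tsunami this gives about 6.3 and 4.4 meter per second – the values quoted in the text.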
When one thinks about it after witnessing the 2011 Japanese tsunami destruction, it is impossible not to wonder about the limitations of human capability in planning and designing measures to withstand the enormous wrath of a tsunami. This feeling arises because Japan is reportedly a tsunami savvy country – perhaps sophisticated in its engineering design standards and codes. Among the limited amount of work in this field, a document prepared by US FEMA (Federal Emergency Management Agency) and US NOAA (National Oceanic & Atmospheric Administration) stands out in the comprehensiveness of its discussion of the problem. It cites the standards of flood resistant designs developed by ASCE/SEI (American Society of Civil Engineers/Structural Engineering Institute). According to the type and nature of forces, tsunami loads are identified as 8 types: **hydrostatic** (*horizontal water pressure if one side is empty, e.g. of a wall*); **buoyancy** (*upward force that lets one feel lighter when in water*); **hydrodynamic** (*primarily due to horizontal drag of speeding water*); **impulsive or slamming** (*horizontal, for example by the leading edge of a tsunami slamming on a wall*); **floating debris impact** (*horizontal; tsunamis carry huge amounts of floating debris that cause impact*); **damming load due to accumulated debris** (*elevated horizontal hydrostatic pressure*); **uplift forces** (*upper floors subjected to upward force while a tsunami slides and speeds underneath a floor*); and **retained weight of water on upper floors** (*weight causing downward force especially when the down-floor is drained*).
In addition to the enormous forces on structures, tsunamis also erode and scour shallow foundations, undermining their stability. The opposite also happens in areas where sedimentation and debris dumps occur. Horizontal wave loads generally arise due to velocity (drag) and acceleration (inertia). Why is only the drag force important for tsunamis? I would like to answer the question based on one of my papers {Wave Load on Piles – Spectral versus Monochromatic Approach, Proceedings, 18th International Offshore and Polar Engineering Conference, Vancouver, ISOPE, 2008}. It turns out that in low frequency (or high period) oscillations, including flood waves, the inertial forces diminish in magnitude – leaving the horizontal hydrodynamic loading processes to the drag effect. I hope to talk more about it at some other time. We have talked about tsunami loads on structures located in the runup zone. How about structures – nearshore marine terminals and offshore oil platforms – standing in water where a tsunami has not broken? In such cases, one needs to resort to nonlinear wave phenomena to determine the tsunami kinematics and loads. How about the effect of sea level rise (SLR) on tsunami? Well, there are no direct effects. However, with the raised mean water level, tsunamis will be able to travel farther inland, and run up higher. The argument becomes clear if one imagines the 2004 and 2011 tsunamis occurring a century later when the SLR stand is likely to be higher. . . . Here is an anecdote to ponder: The disciple said, “Sir, I am feeling very happy today. Someone greeted me with a smile as I was walking by.” The master looked at his disciple and said, “Well, I am very glad to hear that. You seem to be in the right mood to receive the greeting. Reciprocally, your reaction must have made the greeting person happy.” . . . . .
- by Dr. Dilip K. Barua, 6 October 2016
This piece is a continuation of the post on the Nature page.
Let us attempt to see in this piece how different thoughts are taking shape to face the reality of the consequences of sea level rise (SLR). The consequences are expected across the board – in both biotic and abiotic systems. But to limit this blog to a manageable level, I will try to focus – in an engineering sense – on some aspects of human livelihood in frontal coastal areas – some of the problems and the potential ways to adapt to the consequences of SLR (in a later article, Sea Level Rise – the Science, posted in December 2019, I have tried to throw some light on the climate change processes of the interactive Fluid, Solid and Life Systems on Earth – the past, the present and the future – see also Warming Climate and Entropy). Before doing that let us revisit one more time to point out that all familiar lives and plants have a narrow threshold of environmental and climatic factors within which they can function and survive. This is in contrast to many microorganisms like Tardigrades, which can function and survive within a wide variation of factors. Therefore adaptation can be very painful, even fatal, when stresses exceed the thresholds quickly – in time-scales shorter than the natural adaptation time. . . . What is stress? Global warming and SLR literature use this term quite often. Stress is part of the universal cause-effect, force-response, action-reaction, stress-consequence duo. In simple terms and in the context of this piece, global warming is the stress with SLR as the consequence – in turn SLR is the stress with the consequences on human livelihood in the coastal zone. Again, as I have mentioned in the previous piece, the topic is very popular and literature is plentiful with discussions and opinions.
The main resources consulted for this piece are the adaptation and mitigation chapters of the reports worked on by: the UN entity IPCC (Intergovernmental Panel on Climate Change), US NOAA (National Oceanic and Atmospheric Administration), and USACE (United States Army Corps of Engineers). Before going further, it will be helpful to clarify the general meanings of some commonly used terms – these terms are used in the contexts of system responses and reactions – susceptibility, prevention, the ability to adjust to changes, and the ability to cope with repeated disruptions. The first is vulnerability – the susceptibility and inability of a system to cope with adverse consequences or impacts. One can simply cite an example that human habitation, coastal and port infrastructure in low lying areas are more vulnerable to SLR than those lying in elevated areas – and so are the developing societies than the developed ones. The second is known as mitigation – the process of reducing the stresses to limit their impacts or consequences. We have seen on the NATURE page that there could be some 8 factors responsible for SLR. But scientists have identified that the present accelerated SLR is due to the continuing global warming caused by the increased concentration of greenhouse gases. This anthropogenic factor is in human hands to control; therefore reducing greenhouse gases is one of the mitigation measures. The third is adaptation – the process of adjusting to the consequences of expected or imposed stresses, in order to either lessen or avoid harm, or to exploit beneficial opportunities. This is the primary topic of this piece – how humans could adjust to the consequences of SLR. The fourth is closely related to adaptation and is known as resilience – the ability to cope with repeated disruptions.
It is not difficult to understand that the adaptation process can become meaningless without resilience. It is also worth noting that a successful mitigation can only work with some sort of adaptation – for example, adaptation by innovating new technologies to control and limit greenhouse gas emission. However, one may often need to compromise or make trade-offs between mitigation and adaptation processes to chalk out an acceptable solution.

. . .

I have included an image (credit: anon) of human habitation and township developed on a low-lying barrier island. Similar developments occur in most coastal countries – some are due to the pressure of population increase, while others are due to the lack of foresight and understanding by regulating authorities. Such an image is important to look at, to reflect on and to think about human vulnerability, and the processes of mitigation, adaptation and resilience to the consequences of SLR. A series of ASCE Collaborate discussions gives a glimpse of SLR effects on low-lying airports.

Before going further, some aspects of climate change that exacerbate the SLR effects need to be highlighted. The first is the enhanced wave activity associated with SLR. We all have seen, sitting on a shoreline during an unchanging weather condition, the changing nature of waves – less wave activity at low tide and enhanced waves as the tide rises. Why does that happen? One reason is the depth-limited filtering of wave actions – at a certain depth, only waves lower than about 4/5th of the water depth can pass on to the shore without breaking. Therefore, as the water level rises with SLR, the number of waves propagating on to the shore increases. The second is the enhanced storminess caused by global warming. All the climate change studies indicate the likelihood of an increase in storminess, both in intensity and frequency – and perhaps we are witnessing symptoms of that already.
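As an aside, the depth-limited filtering of waves can be sketched numerically. This is a minimal illustration of the ~4/5-of-depth rule quoted above; the depths and the 1 m SLR figure are my own illustrative assumptions, and real breaker indices vary with beach slope and wave steepness.

```python
# Sketch (illustrative, not from the article): depth-limited wave filtering.
# Assumes the simple breaker criterion H_max ~ 0.8 * depth mentioned above.

def max_unbroken_wave_height(depth_m: float, breaker_index: float = 0.8) -> float:
    """Largest wave height (m) that can reach this depth without breaking."""
    return breaker_index * depth_m

# A nearshore point that is 2.5 m deep today, and the same point after
# a hypothetical 1 m of SLR:
today = max_unbroken_wave_height(2.5)
with_slr = max_unbroken_wave_height(2.5 + 1.0)
print(f"Limiting wave height today: {today:.1f} m; after 1 m SLR: {with_slr:.1f} m")
```

The deeper water lets larger waves through unbroken, which is why a rising mean water level translates directly into more (and higher) waves reaching the shore.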
High wind storms are accompanied by high storm surges and waves, together with torrential rainfall and terrestrial flooding.

. . .

What are the consequences of SLR? The consequences are many – some may not even be obvious at the present stage of understanding. They range from transgression of the sea into the land by erosion, inundation and backwater effect, to salt-water intrusion, to increased forces on and overtopping of coastal waterfront structures. If the 2.0 meter SLR really (?) happens by the end of the 21st century, then it is impossible for one not to get scared. I have included a question mark on the 2.0 meter SLR because of the high level of uncertainty in the predictions by different organizations (see the blog Sea Level Rise – the Science on the NATURE page). For such a scenario, the effective SLR is likely to be no less than 10.0 meters, affecting some 0.5 billion coastal people. The effective SLR is an indication of the range of consequences that a mean SLR would usher in.

Let me try to provide a brief outline of the consequences. First, one should understand that the transgression of the sea into the land is not like an invasion by a carpet of water gradually encroaching and inundating the land. It is rather the incremental incidences of high-tide flooding combined with high wave activity. The result is the gradual net loss of land into the sea through erosion and scour, sediment-morphological readjustments and subsequent submergence.

. . .

What does adaptation to SLR really mean? Let us think of the coastal population first. Survival instinct will lead people to abandon what they have, salvage what they can, and retreat and relocate themselves somewhere not affected by SLR – although some may venture to live with water around, if technological innovations come up with proven and viable measures. There are many traditional human habitations in seasonally flooded lowlands, and also examples of boathouses around the world.
Therefore, people’s lives are not so much at stake in direct terms; it is rather the monetary and emotional losses they would incur – losses of land, home and all of their valuable assets and memories. This adaptation process will not be equally felt by all the affected people. Rich people, and perhaps many in developed societies, are likely to cope with the problem better than the others. Perhaps the crux of the problem will be with civil infrastructure. One can identify it as four major types:
- coastal defense structures (e.g. dikes, seawalls, revetments and groins);
- port and harbor infrastructure (e.g. breakwaters, quays, wharves and jetties, and dolphins);
- city and county utility infrastructure (e.g. drainage outlets, bridges and culverts, and coastal roads); and
- homesteads.
There is no denying that all these structures, standing at the waterfront and in-water, will become vulnerable to water forces high in magnitude and frequency – with enhanced slamming and increased incidences of overtopping and scour. City drainage and disposal outlets will be additionally affected by backwater effects. These stresses are likely to gradually undermine the functionality of structures and lead to their failure. Some other aspects of the problem lie with the intrusion of saline sea water into estuaries, inlets and aquifers. Coastal manufacturing and energy facilities requiring freshwater cooling will need to adapt to the SLR consequences. Increased corrosion of structural members will add to the problem.

. . .

What to do with this huge problem of the existing civil infrastructure? What does adaptation mean for such a problem? One may think of renovation and reinforcement, but the likelihood of success of such an adaptive approach may be highly remote. How do the problems translate to the local and state economy? What about abandoning the structures to form submerged reefs – declaring them as water parks or sanctuaries – leaving many as remnants for posterity, like the lost city of Plato’s (Greek philosopher, 423 – 347 BCE) Atlantis? There are no easy answers to any of these questions; but there is no doubt that the stakes are very high. We can only hope that such scenarios will not happen.

What is important, and is being initiated across the board, is the formulation of mitigation and adaptation strategies and policies. Such decision-making processes rely on the paradigm of risk minimization – but can only be meaningful if the uncertainties associated with SLR predictions are also minimized. One important area of work where urgent attention is being paid is the process of updating and redefining the standards, criteria and regulations required for robust planning and design of new coastal civil infrastructure.
However, because of uncertainties in SLR predictions, even this process becomes cumbersome. Placing concrete and steel to build anywhere is easy; what is not easy is envisioning the implications and the future.

. . .

Apart from the 2017 NAP Proceedings – 24847 Responding to the Threat of Sea Level Rise – the 2010 publication – 12783 Adapting to the Impacts of Climate Change – is an excellent read on possible options and approaches to adapt to the different stresses and consequences imposed by climate change.

Well, some thoughts in a nutshell, shall we say? But mitigation of and adaptation to a complex phenomenon like global warming, climate change and SLR must be understood as an evolving process – perfection is only likely to occur as things move forward. And one should not forget that the SLR problems, as complex and intimidating as they are, are slow in human terms – therefore the adaptation processes should be conceived in generational terms; but all indications suggest that the thinking and processes must start without delay. Fortunately, members of the public are aware of the problem and are rightly concerned – whether or not things are rolling in the right direction. Hopefully, this will propel leaders to take thoughtful actions.

. . .

Here is an anecdote to ponder: The disciple said, “Sir, adaptation is a very strong word – perhaps more than what we think it is.” The master replied, “Yes, it does have a deeper meaning. One is in the transformation of the personalities undergoing the adaptation processes. It is a slow and sequential process ensuring the fluidity of Nature, society and evolution. If one thinks about the modern age of information, travel and immigration, the process is becoming much more encompassing – perhaps more than we are aware of. But while slow adaptation is part of the natural process, the necessity of quick adaptation can be very painful and costly.”

. . . . . - by Dr. Dilip K.
Barua, 29 September 2016

With this piece I am breaking my usual 3-3-3 cycle of posting blogs on the NATURE, SOCIAL INTERACTIONS and SCIENCE & TECHNOLOGY pages. The reason is partly due to the comment of one of my friends, who said: it is nice reading the NATURE and SCIENCE & TECHNOLOGY pages; hope you would share more of your other experiences in these pages. Well, I have wanted to do that, but in the disciplined order of 3-3-3 postings. But this does not mean that the practice cannot be changed to concentrate more on the technical pages. Let us see how things go – a professional experience spanning well over 3 decades is long enough to accumulate many diverse and versatile humps and bumps, distinctions and recognitions, and smiles and sadness.

. . .

This piece can be very long, but I will try to limit it to the usual 4 to 5 pages, starting from where I left – some of the model basics described in the blog Natural Equilibrium on the NATURE page, and in the blog Common Sense Hydraulics on this page. To suit the interests of general members of the public, I will mainly focus on the practical aspects of water modeling rather than on numerical aspects. A Collaborate-ASCE discussion link highlights aspects of water modeling fundamentals and issues.

The title of this piece could have been different – numerical modeling, computational modeling, hydrodynamic modeling, wave modeling, sediment transport modeling, morphological modeling . . . Each of these terms conveys only a portion of what modeling of Natural waters means. The Natural waters in a coastal environment are 3-dimensional – in length, width and depth – subjected to the major forces: externally by tide and wave at the open boundaries, wind forcing at the water surface and frictional resistance at the bottom. The bottom can be highly mobile, as in alluvial beds, or relatively fixed, as in a fjord.
Apart from these regular forcings, coastal waters are also subjected to extreme episodes of storm surge and tsunami.

. . .

While the Natural coastal setting is 3-dimensional, it is not always necessary to treat the system as such in a model. Depending on the purpose and the availability of appropriate data, coastal systems can be approximated as 2-dimensional or 1-dimensional. The 2-dimensional shallow-water approximation is possible especially when the aspect ratio (depth/width) is low. When a channel is very long and the aspect ratio is relatively high, it can even be modeled as 1-dimensional.

Apart from these dimensional approximations, some other approximations are also possible because not all terms of the governing equations carry equal weight. I have tried to highlight how to examine the importance of different terms in a conference presentation (A Dynamic Approach to Characterize a Coastal System for Computational Modeling and Engineering. Canadian Coastal Zone Conference, UBC, 2008). The technique, known as scale analysis, lets one examine a complicated partial differential equation by turning it into a discrete scale-value equation. My presentation showing the beauty of scale analysis is highlighted for the governing hydrodynamics of fluid motion – the Navier-Stokes Equation (Claude-Louis Navier, 1785 – 1836 and George Gabriel Stokes, 1819 – 1903). It was further demonstrated, for practical workable solutions, in my Encyclopedia article Seabed Roughness of Coastal Waters. The technique can also be applied to any other equation – such as the integral or phase-averaged wave action equation, and the phase-resolving wave agitation equation. Many investigators deserve credit for developing the phase-averaged wave model – which is based on balancing the wave energy-action. The phase-resolving wave model is based on the formulation by Boussinesq – the French mathematician and physicist Joseph Valentin Boussinesq (1842 – 1929).
The latter is very useful in shallow-water wave motions associated with non-linearity and breaking, and in harbors responding to wave excitation at their entrances.

. . .

To give an idea of model-simulated results, I have included an image taken from one of my PowerPoint presentations (Littoral Shoreline Change in the Presence of Hardbottom – Approaches, Constraints and Integrated Modeling), presented at the 22nd Annual National Conference on Beach Preservation Technology, St. Pete Beach, Florida, 2009 on behalf of Coastal Tech. It is a depiction of model-simulated south-bound longshore currents that could develop during an obliquely incident storm wave from the northeast. The incident wave is about 4 meters high, generated by a Hurricane Frances (September 5, 2004)-like storm on the Indian River County shores in Florida.

. . .

Water modeling is fundamentally different from – and perhaps more complex than – for example, structural stability and strength modeling and computations. This assertion is true in the sense that a water model first aims to simulate the dynamics of Natural flows to a reasonable level of acceptance, before more can be done with the model – to use it as a soft tool to forecast future scenarios, or to predict changes and effects when engineering interventions are planned. Water modeling is like a piece of science and art, where one can have a synoptic view of water level, current, wave and sediment transport, and bed morphology within the space of the model domain simultaneously – a convenience that cannot be afforded by any other means. If the model results are animated, one can see how the system parameters evolve in response to forces and actions – this type of visual is easy and instructive for anyone wanting to understand the beauty and dynamics of fluid motion. For modelers, the displays elevate intuition, helping to identify modeling problems and solutions.

. . .
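To give a feel for the scale-analysis technique mentioned earlier, here is a minimal sketch comparing the order of magnitude of the main terms of the depth-averaged momentum equation for a tidal setting. All the scale values (depth, velocity, amplitude, Chezy coefficient) are my own illustrative assumptions, not numbers from any of the cited studies.

```python
import math

# Illustrative scale analysis of the depth-averaged momentum equation for
# an M2 tide in shallow water. All scale values below are assumptions.
g = 9.81            # gravitational acceleration (m/s^2)
h = 10.0            # water depth scale (m)
U = 1.0             # velocity scale (m/s)
eta = 1.0           # tidal amplitude scale (m)
T = 12.42 * 3600.0  # M2 tidal period (s)
C = 70.0            # assumed Chezy coefficient (m^(1/2)/s)

omega = 2.0 * math.pi / T   # frequency scale (1/s)
L = math.sqrt(g * h) * T    # shallow-water tidal wavelength (m)
k = 2.0 * math.pi / L       # wavenumber scale (1/m)

local_accel = U * omega                 # du/dt
advection = U * U * k                   # u du/dx
pressure = g * eta * k                  # g d(eta)/dx
friction = g * U * U / (C * C * h)      # quadratic bed friction

for name, size in [("local acceleration", local_accel),
                   ("advection", advection),
                   ("pressure gradient", pressure),
                   ("bed friction", friction)]:
    print(f"{name:20s}: {size:.2e} m/s^2")
```

For these scales, the advective term comes out an order of magnitude smaller than the acceleration, pressure-gradient and friction terms – the kind of result that justifies dropping a term from a first-order model of a particular setting.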
Before going further, I would like to clarify two terms I introduced in the blog Coastal River Delta on the NATURE page. These two terms are behavioral model and process-based model. Let me try to explain the meaning of these two terms briefly by illustrating two simple examples. A simple example of a behavioral model is the Bruun Rule, or the so-called equilibrium 2/3rd beach profile, proposed by Per Bruun in 1954 and refined further by Bob Dean (Robert G. Dean, 1931 – 2015) in 1977. The relation simply describes a planar (no presence of beach bars) beach depth as the 2/3rd power of cross-shore distance – without using any beach-process parameters such as the wave height and wave period. The only other parameter the Rule uses is the sand particle settling velocity. This type of easy-to-understand behavioral model, which does not look into the processes exciting the system, exists in many science and engineering applications. Behavioral models capture response behaviors that are often adequate to describe a particular situation – however, they cannot be applied, or need to be updated, if the situation changes. A simple example of a process-based model is the Chezy Equation (Antoine Chezy, 1718 – 1798) of uniform non-accelerating flow – which turns out to result from balancing the pressure-gradient force against the frictional resistance force. In this relation, the velocity of flow is related to water depth, water level slope (or energy slope) and a frictional coefficient. The advantage of a process-based model is that it can be applied in different situations, albeit within the approximations under which it was derived.

. . .

Let us now turn our attention to the core material of this piece – numerical water modeling. The aspects of scale modeling, used to reproduce water motion in a miniature replica of the actual prototype in controlled laboratory conditions, are not covered in this piece.
These types of models are based on scale laws – ensuring that the governing dimensionless numbers are preserved in the model as in the prototype. I have touched upon a little bit of this aspect in the piece Common Sense Hydraulics on this page.

I was introduced to programming and to the fundamentals of numerical water modeling in the academic programs at IHE-UNESCO and at the USC. My professional experience started at the Land Reclamation Project (LRP), with support from my Dutch colleagues and participation in the hydrodynamic modeling efforts of LRP. Starting with programmable calculators, I was able to develop several hydraulic processing programs and tools – later translating them to personal computer versions. I must say, however, that my knowledge and experience really took off and matured during my heavy involvement with numerical modeling efforts in several projects in Canada, USA and overseas. This started with a model selection study I conducted with UBC for the Fraser River in British Columbia.

A little brief on my modeling experiences. They include the systems: 8 in British Columbia, 1 in Quebec, 1 in Newfoundland and Labrador, 2 in Florida, 1 in Texas and 1 in Virginia. Among the modeled processes were hydrodynamics, wave energy actions, wave agitations, coupled wave-hydrodynamics, and coupled wave-hydrodynamics-sediment transport-morphologies. The model forcings were tide, wind and wave, storm surge and tsunami. I will try to get back to some of the published works at some other time. Perhaps it is worthwhile to mention here that modeling experience is also a learning exercise; one can say that all hydraulic engineers should have some hands-on modeling experience, because it lets him or her acquire very valuable insights on hydraulics – simply using available relations to compute forces and parameters may prove incomplete and inadequate.

. . .
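The behavioral/process-based contrast drawn earlier – the Dean 2/3 beach profile versus the Chezy equation – can be made tangible with a minimal sketch. The profile coefficient A and all input values below are illustrative assumptions, not numbers from the text.

```python
# Sketch of the two simple models contrasted earlier. Inputs are hypothetical.

def dean_profile_depth(x_m: float, A: float = 0.1) -> float:
    """Behavioral model: depth h = A * x^(2/3) at cross-shore distance x.
    A (m^(1/3)) is a fitted coefficient tied to sediment settling behavior;
    no wave height or period enters the relation."""
    return A * x_m ** (2.0 / 3.0)

def chezy_velocity(C: float, h: float, S: float) -> float:
    """Process-based model: V = C * sqrt(h * S), resulting from a balance of
    the pressure-gradient force against the frictional resistance force."""
    return C * (h * S) ** 0.5

print(f"Depth 100 m offshore (A = 0.1): {dean_profile_depth(100.0):.2f} m")
print(f"Uniform-flow velocity (C = 60, h = 5 m, S = 1e-4): "
      f"{chezy_velocity(60.0, 5.0, 1e-4):.2f} m/s")
```

Note the design difference the text describes: the behavioral relation would need re-fitting (a new A) if the beach state changed, while the Chezy relation carries over to other settings within its derivation assumptions.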
In the piece Uncertainty and Risk on this page, some unavoidable limitations and constraints of models are discussed. Let me try to outline them in some more detail. Model uncertainties can result from 8 different sources:
- representativeness (the difference between the real and the modeled hydraulic situations);
- empiricism (weak relations embedded in the model formulation);
- discretization of the continuum (unavoidable but minimizable);
- iteration to convergence (when the solution residuals cannot be completely eliminated);
- rounding-off (when machine calculations round off to a certain number of digits);
- application (erroneous data usage in developing and running the model);
- modeler (when the modeler has a poor understanding of the processes he or she is modeling, and of the model's theoretical basics); and
- the numerical code (codes contain thousands of lines and subroutines, therefore it is not unlikely that inadvertent errors creep in).
The types of uncertainties indicate that the onus of model performance falls on the shoulders of three: the software being applied, the constraints associated with the quality of data, and the competence of the modeler. Some of the described uncertainties are better managed and minimized in professional commercial software compared to the academic and freely available ones. In addition, commercial software comes with well-developed customized pre- and post-processing tools, thus saving the time and effort of a modeler. Data quality has deeper meanings; it means not only data accuracy, but also data coverage in space and time – in length and resolution.

. . .

Before a model is ready for application, it requires going through a process of validation. This process of comparing model outputs against corresponding measurements leads to tuning and tweaking of parameters to arrive at acceptable agreements. It is also reinforced with sensitivity analysis, to better understand the model responses to parameter changes. A National Academies publication discusses some important steps on model verification and validation.

I have often been asked whether water modeling is worth the efforts and costs. My unequivocal answer to the question is yes. In this age of quickly improving digital computations, displays, animations and automations, it would be a shame if one thinks otherwise. Science and engineering are not standing still – the capability of numerical models is continually being refined and improved, on par with the development of new techniques in the computing power of digital machines. Like all project phases, a water model can be developed in phases – for example, starting with a coarse and rough model with the known regional data. Such a preliminary model, developed by experienced modelers, can be useful to develop project concepts and pre-feasibilities, and can also help in planning measurements for a refined model required at subsequent project phases.
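The validation step described above – comparing model output against measurements – is often summarized with goodness-of-fit measures. A minimal sketch follows, using root-mean-square error and the Willmott skill index (a common choice of mine here, not necessarily what the cited publication uses); the sample water-level numbers are made up for illustration.

```python
import math

# Sketch of a model-validation check: compare simulated and measured water
# levels with two goodness-of-fit measures. Sample data are hypothetical.

def rmse(model, obs):
    """Root-mean-square error between model and observed series."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def willmott_skill(model, obs):
    """Willmott index of agreement: 1.0 means perfect agreement."""
    obar = sum(obs) / len(obs)
    num = sum((m - o) ** 2 for m, o in zip(model, obs))
    den = sum((abs(m - obar) + abs(o - obar)) ** 2 for m, o in zip(model, obs))
    return 1.0 - num / den

measured = [0.10, 0.55, 0.95, 1.20, 0.90, 0.40]   # observed levels (m)
simulated = [0.15, 0.50, 1.00, 1.10, 0.95, 0.35]  # model output (m)

print(f"RMSE: {rmse(simulated, measured):.3f} m")
print(f"Willmott skill: {willmott_skill(simulated, measured):.3f}")
```

In practice such scores are tracked while friction and other parameters are tuned, and again during sensitivity runs, so that "acceptable agreement" is a number rather than an impression.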
We have tried to conclude that a model is a soft tool; therefore its performance in simulation and prediction is not expected to be exact. This means one should be cautious not to oversell or over-interpret what a model cannot do. But even if a water model is not accurate enough to be applied as a quantitative tool, it can still be useful for qualitative and conceptual understanding of fluid motion – in particular as a tool to examine the effectiveness and effects of engineering measures under different scenarios.

. . .

Here is an anecdote to ponder: The disciple asked the master, “Sir, what does digitization mean to social fluidity or continuity?” The master replied, “Umm! Imagine a digital image built by many tiny pixels to create the totality of it. Each of these pixels is different, yet represents an essential building block of the image puzzle. Now think of the social energy – the energy of the harmonic composite can similarly be high and productive when each building block has the supporting integrity and strength.”

. . . . . - by Dr. Dilip K. Barua, 22 September 2016

In the blog Common Sense Hydraulics on this page, I have pointed out that the first order solution to a hydrodynamic problem is governed by the active forces of excess water pressure and gravitational pull, resisted by the reactive friction force. It only makes sense that I devote this piece as a follow-up on the reactive friction force. This force is caused by the resistance of a solid boundary to the fluid flow of mass and energy over it. Reactive forces are notoriously nonlinear – not only because of the fluid behavior itself, but also because of the mobility of the bed if it happens to be alluvial.

Why is understanding resistance to flow important? One reason is that the first order solution, whether in desktop analysis or in computational modeling, cannot be achieved unless the reactive resistance is accurately understood and parameterized.
In addition, without an accurate first order solution of the dominant processes, advanced order solutions are unachievable. The second reason is that the stability and functions of a shoreline, and of waterfront and maritime structures, can be jeopardized if the understanding of erosion-sedimentation processes remains questionable.

. . .

Before entering into the discussion of the topic, I am tempted to add a few lines on the history of coastal engineering. The official recognition and definition of it was launched at the First Conference on Coastal Engineering held in Long Beach, California in 1950. The conference proceedings, contributed by some invited speakers, gave birth to this new discipline of civil hydraulic engineering. In time, the discipline metamorphosed into several sub-disciplines – not so much in an orderly fashion but rather in a confusing manner.

Apart from journal papers and conference proceedings, the initiative was followed by some remarkable books. Among them were three outstanding ones. The first was a volume {Estuary and Coastline Hydrodynamics. McGraw-Hill, 1966} edited by Arthur T. Ippen (1907 – 1974). The second was an initiative taken by the American Society of Civil Engineers (ASCE) that resulted in a volume {Sedimentation Engineering. ASCE, 1974} edited by Vito A. Vanoni (1904 – 1999). Contributed by outstanding scholars from around the world, these two publications set the scientific background on which many future works were built. Another publication {Shore Protection Manual (SPM), Vol. I and II}, dealing with guidelines on practical applications, was initiated by the U.S. Army Coastal Engineering Research Center (CERC). First published in 1973, the two SPM volumes soon became very popular with practicing engineers, and CERC continued issuing new editions, the last of which was in 1984. In later years, the SPM was reincarnated into the diverse and multiple-volume Coastal Engineering Manual (CEM).
During and following the publications of these three volumes, universities and research institutions of many different countries made significant contributions: notable among them were Delft Hydraulics, HR Wallingford, and Japanese institutions. An edited book {History and Heritage of Coastal Engineering} by Nick Kraus, published in 1996, chronicled the development of the discipline.

. . .

Well, enough on the historical context for now. Let us try to focus on the topic of this piece. But before doing so, a very important concept requires clarification – the boundary layer. This layer of reduced flow velocity develops close to the bed – from zero near the bed to the asymptote of free-stream velocity up in the water column. The reduced velocity is mainly caused by the shearing resistance of the bed, and by the loss of fluid flow energy in eroding and transporting alluvial sediments. The boundary layer is divided primarily into two: a very thin viscous sub-layer near the bed, with a turbulent upper layer above. Interactions between the fluid shearing force and the bottom reactive force within this layer account for viscous and turbulent transfer of momentum within the water column. Described by an asymptotic logarithmic or a power function, the height of this layer changes in response to changes in flow velocity and bed roughness.

Among other investigators, my own work {Some Aspects of Turbulent Flow Structure in Large Alluvial Rivers. Journal of Hydraulic Research, Taylor & Francis, 1998} for the Flood Action Plan – River Survey Project provides some insights into the flow structure of the boundary layer. The investigation showed that the bed-generated turbulence in the presence of high dune-scale bedforms reached its maximum at a height of 5 to 10% of the water depth above the bed, with the strength decaying above that level up through the water column.
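The logarithmic velocity profile mentioned above can be sketched in a few lines. The shear velocity u* and roughness length z0 below are assumed, illustrative values, not results from the cited investigation.

```python
import math

# Sketch of the logarithmic boundary-layer profile:
#   u(z) = (u_star / kappa) * ln(z / z0)
# u_star (shear velocity) and z0 (roughness length) are assumed values.
KAPPA = 0.40  # von Karman constant

def log_law_velocity(z: float, u_star: float = 0.05, z0: float = 0.001) -> float:
    """Mean velocity (m/s) at height z (m) above the bed; zero at or below z0."""
    if z <= z0:
        return 0.0
    return (u_star / KAPPA) * math.log(z / z0)

for z in (0.01, 0.1, 1.0, 5.0):
    print(f"z = {z:5.2f} m -> u = {log_law_velocity(z):.2f} m/s")
```

The logarithmic shape is why velocity climbs steeply in the lowest fraction of the water column and then flattens toward the free-stream value, as described above.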
A bedform is a wavy undulation, mostly noticeable in a sandy bed, which is primarily asymmetric in unidirectional flows and symmetric in oscillatory short-wave environments. The image shown in this piece gives a snapshot of some small-scale bedforms.

. . .

How to characterize the bed resistance? The question can best be answered using the theory proposed by Daniel Bernoulli (1700 – 1782). His formulation shows that dynamic water pressure, or kinetic energy, is defined in terms of V^2, V being the mean flow velocity of the current. It is this dynamic pressure that is responsible for exerting drag on the bed. We will find out in later discussions that it is also this dynamic pressure that causes drag force on structures in water. Many, including one of the pioneering investigators Ralph Alger Bagnold (1896 – 1990), used this drag to formulate the bed shear force, or the equivalent bottom reactive force, and sediment transport. The formulation, known as the quadratic friction law, defines this force as the product of a non-dimensional drag coefficient, water density and the square of the current velocity.

The quadratic friction law is universally applicable in unidirectional flows, such as in rivers, as well as in oscillatory long-wave motions such as tide, tsunami or storm surge. The drag coefficient is related to its counterparts among other known resistance coefficients, such as the Chezy (Antoine Chezy, 1718 – 1798) coefficient C, Manning’s (Robert Manning, 1816 – 1897) n, and the Darcy-Weisbach friction factor f. To give an idea, it can be shown that for a water depth of 10 m and a representative bed-material size of 3 mm, the drag coefficient is 0.0014, C = 84.13 m^(1/2)/s, n = 0.0175 s/m^(1/3) and f = 0.0111. See more in Seabed Roughness of Coastal Waters.

When deformation of the alluvial bed occurs at high flow stages, additional resistance to flow is imposed.
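The quoted coefficient values can be cross-checked with the standard conversions among the resistance coefficients – Cd = g/C^2, n = h^(1/6)/C (taking depth as the hydraulic radius of a wide channel), and f = 8g/C^2. A minimal sketch, starting from the C = 84.13 of the example above, reproduces the other three values to within rounding:

```python
G = 9.81  # gravitational acceleration (m/s^2)

def drag_from_chezy(C: float) -> float:
    """Non-dimensional drag coefficient in tau = rho * Cd * V^2: Cd = g / C^2."""
    return G / C**2

def manning_from_chezy(C: float, h: float) -> float:
    """Manning's n = h^(1/6) / C, using depth h as hydraulic radius."""
    return h ** (1.0 / 6.0) / C

def darcy_from_chezy(C: float) -> float:
    """Darcy-Weisbach friction factor f = 8 g / C^2."""
    return 8.0 * G / C**2

C, h = 84.13, 10.0  # Chezy coefficient and depth from the example above
print(f"Cd = {drag_from_chezy(C):.4f}")        # ~0.0014
print(f"n  = {manning_from_chezy(C, h):.4f}")  # ~0.0174
print(f"f  = {darcy_from_chezy(C):.4f}")       # ~0.0111
```

These conversions make clear that the four coefficients are one piece of physics expressed in four conventions, not four independent measurements.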
The migrating bedforms, varying from ripples to large dunes, are mostly asymmetric, with a flatter stoss slope and a steeper avalanche slope. In most instances, the larger the bedform, the larger the resistance to flow. In one investigation {Bedform Dynamics and Sediment Transport – Report of an Investigation in the Jamuna River. Institution of Engineers Bangladesh, Paper #41-4-06, 1996} I worked on, it was found that bedforms accounted for 70% of the total bed resistance.

. . .

The bed resistance discussed so far is applicable to unidirectional flow, and to tide, tsunami and storm surge. How about the bed resistance to short-wave oscillatory motions? The quadratic friction law is similarly applicable in short-wave motions, with the applicable velocity taken as the amplitude of the bottom orbital velocity, and the drag coefficient renamed as a wave friction factor. The wave orbital velocity is a bidirectional vector with a peak in either direction; this peak is the amplitude. In addition, a relation proposed by Swart {Offshore sediment transport and equilibrium beach profiles. Delft Hydraulics Publication No. 131, 1974} shows that the amplitude of the bottom orbital excursion affects the wave friction factor. As an example, at a water depth of 10 m and a representative bed-material size of 3 mm, the friction factor is 0.0175 for a 2-meter, 10-second wave. As in unidirectional flow, the presence of bedforms – mostly symmetric in short-wave environments – enhances the frictional resistance.

In a hydraulic environment where both long and short waves are active, the wave-current friction factor accounts for both. The resistances triggered by the two sources are added in some fashion that depends on the relative magnitudes of the current and the amplitude of the bottom wave orbital velocity. Well, I am not sure whether I have managed to explain the topic in plain English as promised.
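A hedged sketch of the wave friction factor calculation just described: linear wave theory gives the bottom orbital excursion amplitude, and Swart's (1974) fit gives fw. The roughness choice ks = 2.5·d50 is my own assumption (other roughness choices shift the result slightly – with this one, the 2 m, 10 s, 10 m-depth example comes out near 0.017, close to the 0.0175 quoted above).

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def wavenumber(T: float, h: float) -> float:
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) by
    fixed-point iteration (converges for these shallow-water cases)."""
    omega = 2.0 * math.pi / T
    k = omega**2 / G  # deep-water first guess
    for _ in range(50):
        k = omega**2 / (G * math.tanh(k * h))
    return k

def swart_friction_factor(H: float, T: float, h: float, d50: float) -> float:
    """Wave friction factor from Swart's (1974) fit,
    fw = exp(5.213*(ks/a)^0.194 - 5.977), with assumed roughness ks = 2.5*d50."""
    k = wavenumber(T, h)
    a = H / (2.0 * math.sinh(k * h))  # bottom orbital excursion amplitude (m)
    ks = 2.5 * d50
    if ks / a >= 0.63:
        return 0.3  # rough upper limit of the fit for very rough beds
    return math.exp(5.213 * (ks / a) ** 0.194 - 5.977)

# 2 m, 10 s wave in 10 m of water over 3 mm bed material:
print(f"fw = {swart_friction_factor(2.0, 10.0, 10.0, 0.003):.4f}")  # ~0.017
```

Note how the orbital excursion amplitude a enters the fit, reflecting the point made above that it is the excursion at the bed, not just the grain size, that sets the wave friction factor.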
I have tried to keep things as simple as possible, with only some limited but unavoidable use of scientific jargon. One more thing: it is important to mention that the materials covered in this piece are highly empirical, which means that there is no single value of coefficients applicable to all different cases and all hydraulic environments. Empirical coefficients are usually valid in orders of magnitude, and they require verification to examine their applicability to specific cases. If that is not possible, their limitations and uncertainties should be highlighted.

. . .

Here is an anecdote to ponder: The disciple asked the master, “Sir, how would you see the resistance to flow in a wider social context?” The master smiled, “Well, the cause-effect, force-response or action-reaction duo is universally present everywhere. Things are even more complicated in a society than the complexities of fluid flow. What we are talking about here are positive actions or endeavors, and reactions to them. We used to hear from elders in our childhood that life was full of thorns. So resistances are there no matter how much we don’t want them.” “Could you elaborate please?” “In a society, resistance could come from individuals or from a collective bunch. But while a reactive resistance in natural fluid flows is instantaneous, the same in social interactions could be instantaneous, delayed or even absent. If delayed, the reactive resistance can metamorphose into something different and subdued. Therefore it is often helpful to delay high emotional reactions. But in a society not all resistances are reactive. Some are active resistances, which could either mean something good, or could mean something malicious. Some malicious resistances could even reach the scale of an obstacle.” “Thank you Sir.
When do you think active social resistances can become damaging?” “Well, the effects could become very frustrating, ugly and damaging when a society promotes, or is founded upon, division, mistrust and conflict.”

. . . . . - by Dr. Dilip K. Barua, 11 August 2016