
Use ST_Buffer with a conditional distance based on an attribute



I am quite new to PostgreSQL/PostGIS and am still learning basic skills. I have imported a point-layer shapefile into my Postgres database.

I need to make a buffer of varying radius around the points depending on an attribute. I have been reading about the ST_Buffer function in the documentation, but I am having problems interpreting it so far.

I am trying to use the form given in the documentation: geometry ST_Buffer(geometry g1, float radius_of_buffer)

I have tried SELECT ST_Buffer(mypoints.geom, 100) FROM myschema.mypoints, but I cannot work out how to vary the radius by attribute.

I hope someone could help me write it out properly.

Best Regards


If you want conditional buffering based on some attribute, you can use a CASE expression, e.g.,

SELECT ST_Buffer(geom, CASE WHEN atr = 0 THEN 10 WHEN atr = 1 THEN 20 ELSE 30 END) FROM mypoints;

Obviously, you can have as many WHENs as you like, though it could get cumbersome fast. If the radius is a straight multiple of the attribute, then you can just do ST_Buffer(geom, atr * 2), for example.
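A minimal sketch that materializes the conditional buffers into a new table; the source table and the atr column are taken from the answer above, while the id column and the output table name are only illustrative:

  -- Buffer radius chosen per row from the attribute "atr"
  CREATE TABLE myschema.mypoints_buffered AS
  SELECT id,
         ST_Buffer(geom,
                   CASE WHEN atr = 0 THEN 10
                        WHEN atr = 1 THEN 20
                        ELSE 30
                   END) AS geom
  FROM myschema.mypoints;

Note that the buffer radius is expressed in the units of the layer's coordinate reference system: metres for a metric projected CRS, degrees for plain latitude/longitude (in which case a cast to geography or a reprojection is usually preferable).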


Just set your scope variable to whatever condition you want!

Actually I posted "required" just as an example. In my particular case it's a custom attribute, and its mere presence, whether the value is true or false, affects the functionality. So I need to omit the attribute entirely depending on the condition.

If this is in reference to empty HTML5 attributes, and to removing the attribute itself to avoid a boolean true condition, then:

  • remove an attribute with: angular.element(DOMelement).removeAttr(attribute)
  • read an attribute with angular.element(DOMelement).attr(attribute)
  • set an attribute with angular.element(DOMelement).attr(attribute, value)

Note: all element references in Angular are always wrapped with jQuery or jqLite; they are never raw DOM references.

Hope this helps. Good luck! EDIT 1: Here's an SO discussion about DOM manipulation and AngularJS; here's another.


1 Answer

This can be handled with a rather simple CHECK constraint; a sketch is given below.

The logic behind the code is that the logical restriction "if a then b" is written in boolean logic as (NOT a) OR (b). It may seem counter-intuitive at first look, but if you write out the possible boolean combinations it works out.

The constraint could also be written as CHECK ( NOT (attribute AND number IS NULL) ), which may seem a bit more explanatory ("do not allow attribute to be true and number to be null at the same time"). Pick whatever is more readable to you.
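A minimal sketch of such a constraint, assuming PostgreSQL with a boolean column named attribute and a nullable column named number (the table name and the id column are only illustrative):

  -- "number" must be set whenever "attribute" is true:
  -- "if attribute then number IS NOT NULL"  ==  "(NOT attribute) OR number IS NOT NULL"
  CREATE TABLE example (
      id        serial PRIMARY KEY,
      attribute boolean NOT NULL DEFAULT false,
      number    integer,
      CONSTRAINT number_required_when_attribute
          CHECK ( (NOT attribute) OR number IS NOT NULL )
  );

Rows with attribute = false may leave number null; rows with attribute = true are rejected unless number is supplied.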


2 Answers

The major issue in your task is that a Gaussian blur shader typically operates as a post process. In general it is applied to the entire viewport after all the geometry has been drawn. The Gaussian blur shader takes a fragment of a framebuffer (texture) and its neighbours, mixes their colors with a Gaussian weighting function, and stores the new color in a final framebuffer. For this, the drawing of the entire scene (all points) has to be finished first.

But you can do something else. Write a shader that draws the points completely opaque at their centers and completely transparent at their outer borders.

In the vertex shader you have to pass the view space vertex coordinate and the view space center of the point to the fragment shader:

In the fragment shader you have to calculate the distance from the fragment to the center of the point. To do this you have to know the size of the point. The distance can be used to calculate the opacity and the opacity is the new alpha channel of the point.

Add a uniform variable strokeWeight and set the uniform in the program. Note that because the points are transparent at their borders, they look smaller. I recommend increasing the size of the points:

You are drawing partly transparent objects. To achieve a proper blending effect you should sort the points by ascending z coordinate:

The full example code may look like this:

Of course it is possible to use the gaussian blur shader too.

The Gaussian blur shader, which you present in your question, is a 2-pass post-process blur shader. This means it has to be applied in 2 post-process passes over the entire viewport. One pass blurs along the horizontal axis, the other pass blurs along the vertical axis.

To do this you have to do the following steps:

  1. Render the scene to a buffer (image)
  2. Apply the vertical gaussian blur pass to the image and render the result to a new image buffer
  3. Apply the horizontal gaussian blur pass to the result of the vertical gaussian blur pass

A code listing that uses exactly the shaders from your question may look like this:

For an approach which combines the 2 solutions, see also the answer to your previous question: Depth of Field shader for points/strokes in Processing

Further, a point shader is needed for drawing the points:

The 2 pass depth of field, gaussian blur shader looks like this:

In the program you have to do 4 stages:

  1. Render the scene to a buffer (image)
  2. Render the "depth" to another image buffer
  3. Apply the vertical gaussian blur pass to the image and render the result to a new image buffer
  4. Apply the horizontal gaussian blur pass to the result of the vertical gaussian blur pass


Can you put two conditions in an xslt test attribute?

It does have to be wrapped in an <xsl:choose>, since it is an xsl:when. And lowercase the "and".

Like xsl:if instructions, xsl:when elements can have more elaborate contents between their start- and end-tags—for example, literal result elements, xsl:element elements, or even xsl:if and xsl:choose elements—to add to the result tree. Their test expressions can also use all the tricks and operators that the xsl:if element's test attribute can use, such as and, or, and function calls, to build more complex boolean expressions.

Maybe this is a no-brainer for the XSLT professional, but for me, at beginner/intermediate level, this got me puzzled. I wanted to do exactly the same thing, but I had to test a responsetime value from an XML document instead of a plain number. Following this thread, I tried this:

which generated an error. This works:

Don't really understand why it doesn't work without number(), though. Could it be that without number() the value is treated as a string and you can't compare numbers with a string?

Anyway, hope this saves someone a lot of searching.


1 Answer

In order to use conditional visibility with custom LWC components, there are two things that need to happen.

  1. Dispatch the FlowAttributeChangeEvent when you want to notify the flow runtime that a change has been made.
  2. Use automatically assigned variables in the conditions that need to be notified of the change.

Any screen input component with Manually assign variables (advanced) selected isn’t available as a resource for conditional visibility on the same flow screen.

If you check the Manually assign variables (advanced) checkbox and map a variable resource to one of your outputs, and then use that variable resource in your condition, it won't work. The reason is that the actual value of the variable resource is not set until the information is sent back to the server (e.g. when the Next button is clicked). Only the values for actual screen components will trigger the check.

For example, in the image below, the Product Picker LWC component with the API name Bundle is using automatic variable assignment: the Manually assign variables (advanced) checkbox is unchecked. The Product Picker component has a Value attribute. Another component has a condition that checks whether that value is null. This will work when the two components are on the same screen.

In this image, the Product Picker component with the API name ManualBundle has the Manually assign variables (advanced) checkbox checked, and the Value attribute set to a variable resource. The condition checks whether that variable resource is null. This condition will not work if both components are on the same flow screen. The condition would only work if the component with the visibility check is on a later screen in the flow, because the variable resource is not actually set until it's sent to the server-side engine.

As of Winter '20, an Aura or LWC component added to a flow screen has automatic variable assignment enabled by default.


Geographic Analysis Explained through Pokemon GO

Hello, pokemon trainers of the World! Today, I would like to explain Geographic Analysis using the ideas of the Pokemon GO game that you know only too well. I hope that you will return to the game with a good understanding of the geographic concepts and the geospatial technology behind it.

Save for some serious cheating, you have to move around this thing called THE REAL WORLD with your location-enabled device in order to "catch'em all". Smartphone producers make it really difficult to manipulate the GPS location, because it is such a critical function of your device. So, unless you are truly close to that poke stop, you won't be able to access its resources: free poke balls, razz berries, etc. In Geography, we often study the location of points-of-interest or services. For example, if you live or work close to a specific shopping mall or hospital, you are likely to use their services at one point or another. Or, if you are far away from a college or university and still choose to pursue higher education, you may have to move in order to be within reach of that institution.

To use a poke stop or gym, or to catch a pokemon, you do not need to be at their exact coordinate locations, but you need them to appear within your proximity circle as you move around. In Geographic Analysis, we often examine this “reach”, or catchment area, that is defined by proximity to locations of interest. For example, when a coffee chain looks to open a new store, Geographers will examine their competitors’ locations and surrounding neighbourhood profiles to determine whether there is a gap in coverage or whether there are catchment areas that include enough people of the right demographic to support an additional cafe. In Retail Geography, we call these areas “trade areas”. That’s why you can find clusters of Tim Horton’s, Second Cup, and/or Starbucks at major intersections where the geodemographics are favourable – yes, this is likely a Geospatial Analyst’s work! And that’s also why you can find clusters of poke stops in some of your favourite busy locations.

To support business decision-making, AKA “location intelligence”, Geographers use data on population, household incomes and employment, the movement of people, and the built environment. If you have ever “watched” pokevision.com for different locations, you will have noticed great variation in the pokemon spawn density and frequency. For example, in our screenshots below you can see tons of pokemon in downtown Toronto, but not a single one in an area of rural Ontario. Similarly, there are dozens of poke stops and several gyms within walking distance in the City but a lone poke stop in rural Ontario. The Pokemon GO vendor, Niantic, seems to be using geodemographics in determining where pokemon will spawn. They make it more likely for pokemon to spawn where there are “clients”: that is, yourselves, the trainers/players.

Fig. 1: poke stop locations and pokemon appearances in downtown Toronto (a, b), compared to rural Ontario (c)

Geographic space is a unique dimension that critically influences our lives and societies. The spatial distribution of people and things is something that Geographers study. Just like the spawning of pokemon in general, the appearance of the different types of pokemon is not randomly distributed either. For example, it has been shown that water-type pokemon are more likely to appear near water bodies. See all those Magikarps near the Toronto lakefront in the screenshot below? A few types of pokemon even seem restricted to one continent, such as Tauros in North America, and won't appear on another (e.g., Europe). The instructions by "Professor Willow" upon installation of the app actually refer to this regional distribution of pokemon. I also believe that the points-of-interest, such as buildings, that serve as poke stops determine the pokemon types spawning near them. For example, the Ontario Power Building at College St. and University Ave. in Toronto regularly spawns an Electabuzz, as shown in the last screenshot below.

Fig. 2: (a) "Professor Willow" explaining his interest in studying the regional distribution of pokemon (what a great-looking Geographer he is!); screenshots of pokevision.com with (b) Magikarps at the Toronto lakefront and (c) an Electabuzz near the Ontario Power Building

In Environmental Geography, we often analyze (non-pokemon) species distributions, which are also not random. The availability of suitable habitat is critical, just like for pokemon. In addition, spatial interactions between species are important; remember the food chain you learned about in school. I am not sure whether different pokemon types interact with one another; maybe that could be the topic of your first course project, as you enter a Geography program at university?

The techniques that we use within Geographic Information Systems (GIS) include suitability mapping, distance and buffer analysis, and distance decay. Distance decay means that it becomes less and less likely to encounter a species as you move away from suitable habitat; or, in the business field, that people become less and less likely to shop at a specific mall the further away from it they live. A buffer is an area within a specified distance around a point, line, or polygon, just like the proximity circle around your pokemon avatar. GIS software can determine whether other features fall within the buffer around a location. Instead of enabling access to poke stops or gyms around your avatar, Geographers would use buffer analysis to determine which residents have access to public transit, e.g. whether they live within a 500 m or 1 km walking distance of a transit stop.
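In PostGIS terms, tying back to the ST_Buffer question that opened this page, such a transit-access check might look roughly like the sketch below; the table and column names are purely illustrative:

  -- Which residences fall within 500 m (straight-line) of any transit stop?
  SELECT DISTINCT r.id
  FROM residences r
  JOIN transit_stops s
    ON ST_Intersects(r.geom, ST_Buffer(s.geom, 500));  -- 500 assumes a metric projected CRS

In practice ST_DWithin(r.geom, s.geom, 500) is the more common idiom for the same test, since it can use a spatial index directly instead of building a buffer polygon per stop.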

A final thought about how Pokemon GO has brought Geography to the headlines concerns important professional and societal challenges that Geographers can tackle. These range from map design and online map functionality to crowdsourcing of geospatial data, as well as the handling of big data, privacy concerns, and ultimately the control of people’s locations and movement. The now-defunct pokevision.com Web map used Esri online mapping technology, one of the world-leading vendors of GIS software and promoters of professional Geography. Another approach, which is used by pokemonradargo.com, has trainers (users) report/upload their pokemon sightings in real-time. This geospatial crowdsourcing comes with a host of issues around the accuracy of, and bias in, the crowdsourced data as well as the use of free labour. For example, poke stops were created by players of a previous location-based game called “Ingress” and are now used by Niantic in a for-profit venture – Pokemon GO! Finally, you have all read about the use and misuse of lure to attract people to poke stops at different times of day and night. The City of Toronto recently requested the removal of poke stops near the popular island ferry terminal for reasons of pedestrian control and safety. Imagine how businesses or government could in the future control our movement in real space with more advanced games.

I hope I was able to explain how Pokemon GO is representative of the much larger impact of Geography on our everyday lives and how Geographers prepare and make very important, long-term decisions in business and government on the basis of geospatial data analysis. Check out our BA in Geographic Analysis or MSA in Spatial Analysis programs to find out more and secure a meaningful and rewarding career in Geography. And good luck hunting and training more pokemon!


1 Answer

I'm not so keen on the total_memory idea; it's somewhat of an indirect quantity. Maybe it makes more sense in the context of whatever the magical daemon is, though.

For tunable attributes like total memory, I pretty much would do as you have laid out in the question by adding a sensible default value to attributes/default.rb (cuts down on support questions when someone forgets to explicitly set a value) and override with environment, role or node-specific values where necessary.

It's possible to do arithmetic inside the ERB file like this:

Ohai makes available the statistics from free(1), which include the total memory in kB; node['memory']['total'] is '12312432kB' on my workstation.

I also try to use attributes with the lowest priority as much as possible, i.e. prefer default over normal attributes, and normal attributes over override attributes. So,

  • choose a sensible recipe default where possible
  • use a default environment attribute (you use an override attribute in the example)
  • use a role attribute for groups of nodes (again you use an override attribute)
  • and finally a default node attribute

See the attribute precedence page in the Chef wiki for the order in which attributes override each other.

Using the lowest precedence default attributes where possible allows you to set the attribute value depending on the environment, role and node, but frees up the upper levels of precedence when you need to do something tricky.


Conclusions

Malaria is a focal disease, and even this small settlement area presented heterogeneity in the spatial distribution of incidence. These patterns are less related to the natural environment per se than to land use, landscape modification due to human activities in the settlement, and the proximity of individuals to places with elevated vector presence.

In a high malaria-risk area, GIS and logistic regression could be successfully applied to predict the relative likelihood of disease infection, which is positively related principally to proximity to gold mining areas and elevated nearby mining areas and, secondarily, to intense land use, lower vegetation density and higher soil humidity.

Findings on the relationship between malaria cases and environmental factors should be applied in the future for land use planning in rural settlements in the Southern Amazon to minimize risks of disease transmission.



The STATISTICS table provides information about table indexes.

Columns in STATISTICS that represent table statistics hold cached values. The information_schema_stats_expiry system variable defines the period of time before cached table statistics expire. The default is 86400 seconds (24 hours). If there are no cached statistics or statistics have expired, statistics are retrieved from storage engines when querying table statistics columns. To update cached values at any time for a given table, use ANALYZE TABLE. To always retrieve the latest statistics directly from storage engines, set information_schema_stats_expiry=0. For more information, see Section 8.2.3, "Optimizing INFORMATION_SCHEMA Queries".

If the innodb_read_only system variable is enabled, ANALYZE TABLE may fail because it cannot update statistics tables in the data dictionary, which use InnoDB. For ANALYZE TABLE operations that update the key distribution, failure may occur even if the operation updates the table itself (for example, if it is a MyISAM table). To obtain the updated distribution statistics, set information_schema_stats_expiry=0.
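For example, to force fresh statistics for one table before inspecting them (the database and table names here are only placeholders):

  SET SESSION information_schema_stats_expiry = 0;  -- read statistics straight from the storage engine
  ANALYZE TABLE mydb.mytable;                        -- refresh the key distribution statistics
  SELECT * FROM INFORMATION_SCHEMA.STATISTICS
  WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';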

The STATISTICS table has these columns:

TABLE_CATALOG: The name of the catalog to which the table containing the index belongs. This value is always def.

TABLE_SCHEMA: The name of the schema (database) to which the table containing the index belongs.

TABLE_NAME: The name of the table containing the index.

NON_UNIQUE: 0 if the index cannot contain duplicates, 1 if it can.

INDEX_SCHEMA: The name of the schema (database) to which the index belongs.

INDEX_NAME: The name of the index. If the index is the primary key, the name is always PRIMARY.

SEQ_IN_INDEX: The column sequence number in the index, starting with 1.

COLUMN_NAME: The column name. See also the description for the EXPRESSION column.

COLLATION: How the column is sorted in the index. This can have values A (ascending), D (descending), or NULL (not sorted).

CARDINALITY: An estimate of the number of unique values in the index. To update this number, run ANALYZE TABLE or (for MyISAM tables) myisamchk -a.

CARDINALITY is counted based on statistics stored as integers, so the value is not necessarily exact even for small tables. The higher the cardinality, the greater the chance that MySQL uses the index when doing joins.

SUB_PART: The index prefix. That is, the number of indexed characters if the column is only partly indexed, NULL if the entire column is indexed.

Prefix limits are measured in bytes. However, prefix lengths for index specifications in CREATE TABLE, ALTER TABLE, and CREATE INDEX statements are interpreted as number of characters for nonbinary string types (CHAR, VARCHAR, TEXT) and number of bytes for binary string types (BINARY, VARBINARY, BLOB). Take this into account when specifying a prefix length for a nonbinary string column that uses a multibyte character set.

PACKED: Indicates how the key is packed. NULL if it is not.

NULLABLE: Contains YES if the column may contain NULL values and '' if not.

INDEX_TYPE: The index method used (BTREE, FULLTEXT, HASH, RTREE).

COMMENT: Information about the index not described in its own column, such as disabled if the index is disabled.

INDEX_COMMENT: Any comment provided for the index with a COMMENT attribute when the index was created.

IS_VISIBLE: Whether the index is visible to the optimizer. See Section 8.3.12, "Invisible Indexes".

MySQL 8.0.13 and higher supports functional key parts (see Functional Key Parts), which affects both the COLUMN_NAME and EXPRESSION columns:

For a nonfunctional key part, COLUMN_NAME indicates the column indexed by the key part and EXPRESSION is NULL.

For a functional key part, the COLUMN_NAME column is NULL and EXPRESSION indicates the expression for the key part.

Notes

There is no standard INFORMATION_SCHEMA table for indexes. The MySQL column list is similar to what SQL Server 2000 returns for sp_statistics, except that QUALIFIER and OWNER are replaced with CATALOG and SCHEMA, respectively.

Information about table indexes is also available from the SHOW INDEX statement. See Section 13.7.7.22, “SHOW INDEX Statement”. The following statements are equivalent:
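Roughly, the pair of equivalent statements is as follows (tbl_name and db_name are placeholders for an actual table and database):

  SELECT * FROM INFORMATION_SCHEMA.STATISTICS
  WHERE TABLE_NAME = 'tbl_name' AND TABLE_SCHEMA = 'db_name';

  SHOW INDEX FROM tbl_name FROM db_name;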



