jQuery LinkedIn Button with Share Content

Prerequisites:

  1. Read the documentation at developer.linkedin.com.
  2. Create an application by following the developer steps at LinkedIn.
  3. Once the application is created successfully, save your Client ID & Client Secret in a secure location.
  4. Post a comment if you have any question on the prerequisites. Let's jump to the topic.

 

Create a Custom Login Button:

HTML:

<!DOCTYPE html>
<html>

<head>
<meta charset="utf-8" />
</head>

<body>

<button type="button" class="btn btn-lg btn-info linkedinShareButton">
<i class="icon icon-linkedin"></i>
</button>

</body>

</html>
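Note: the btn classes and the linkedin icon above assume that Bootstrap and an icon font providing the icon-linkedin class are already loaded on the page. As an example, Bootstrap 3 from a CDN (swap in whichever icon font your project actually uses):

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">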

Looks like below:

linkedin.JPG

jQuery:

First, load the LinkedIn JavaScript SDK (in.js); the api_key below is the Client ID you saved in the prerequisites:

<script type="text/javascript" src="//platform.linkedin.com/in.js">
  api_key: YOUR_CLIENT_ID
</script>

// Setup an event listener to make an API call once auth is complete
function onLinkedInLoad() {
  shareContent();
}

// Handle the successful return from the API call
function onSuccess(data) {
  console.log("onSuccess");
  console.log(data);
}

// Handle an error response from the API call
function onError(error) {
  console.log("error");
  console.log(error);
}

// Use the API call wrapper to share content on LinkedIn
function shareContent() {
  // Build the JSON payload containing the content to be shared
  var payload = {
    "comment": "Check out developer.linkedin.com! http://linkd.in/1FC2PyG",
    "visibility": {
      "code": "anyone"
    }
  };

  IN.API.Raw("/people/~/shares?format=json")
    .method("POST")
    .body(JSON.stringify(payload))
    .result(onSuccess)
    .error(onError);
}

// Authorize on click, then share once the "auth" event fires
$(document).on("click", ".linkedinShareButton", function() {
  IN.UI.Authorize().place();
  IN.Event.on(IN, "auth", onLinkedInLoad);
});
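For reference, IN.API.Raw above wraps LinkedIn's v1 Share REST endpoint. A rough sketch of the equivalent raw HTTP call (this assumes you already hold a valid OAuth access token; the token value is a placeholder):

POST https://api.linkedin.com/v1/people/~/shares?format=json
Content-Type: application/json
Authorization: Bearer YOUR_ACCESS_TOKEN

{
  "comment": "Check out developer.linkedin.com! http://linkd.in/1FC2PyG",
  "visibility": { "code": "anyone" }
}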

That's it! Now go and check your post on LinkedIn.com; it can take a couple of minutes for the post to show up on your wall.

Let me know if you have any comments, suggestions, or appreciation 😉

 

Configure geo_point in Logstash for Kibana Tile Map with MS-SQL

Prerequisites

  1. Elasticsearch.
  2. Logstash.
  3. Sense plugin for ES.
  4. MS SQL JDBC driver (as this article is based purely on an MS SQL query, we need the JDBC driver installed and configured in the Logstash config file; if you don't know how to do that, please read my other article MS-SQL CONFIG WITH LOGSTASH).

In my example I am trying to find the number of customers registered to my website or blog, using their addresses within the US.

Step 1: Create a simple query as below

SELECT CustomerID, Address, City, State, Postal_Code, Latitude, Longitude
FROM CustomerData with (nolock)

Data looks something like below

customerData.JPG

Save the query as “customerInfo.sql” at your desired location or in the Logstash bin directory. For my project I am going to save it under “c:\projectKibana\logstash\logstash 2.2.2\bin\”

Step 2: Open Sense

  1. Open your browser and go to “http://localhost:5601/app/sense”
  2. Create an index called customerdata (Note: index names in ES are always lowercase)
PUT /customerdata

and click on the Play button (or Ctrl + Enter); you should see something like below

{
 "acknowledged": true
}

3. Create a mapping within Sense for the index you just created, as below

PUT customerdata/_mapping/dataforcustomer
{
  "properties": {
    "@timestamp": {
      "type": "date",
      "format": "strict_date_optional_time||epoch_millis"
    },
    "@version": {
      "type": "string"
    },
    "CustomerID": {
      "type": "long"
    },
    "Address": {
      "type": "string"
    },
    "City": {
      "type": "string"
    },
    "State": {
      "type": "string"
    },
    "Postal_Code": {
      "type": "string"
    },
    "Latitude": {
      "type": "float"
    },
    "Longitude": {
      "type": "float"
    },
    "location": {
      "type": "geo_point"
    }
  }
}

 

 

Run it using Ctrl+Enter or the green Play button.

What we are doing here is creating a mapping for the {customerdata} index using {_mapping}, and within that index we are creating a table called {dataforcustomer}.

Later we create some property mappings based on our SQL data set. However, if you notice, I am also creating 3 extra fields: @timestamp, @version and location.

@timestamp: this is useful to have; from this I know at what date and time records got pushed to ES from my conf file.

@version: the version is useful to track any changes in your data; it is updated when indexed data is updated.

location: this is where we are going to push our latitude and longitude data as a two-dimensional array (more about it later in this article). This field must be configured as “type”: “geo_point”; this is very important. Tile maps will work only when geo_point is configured, as we did here in this example. Make sure the field is also named location, to avoid any confusion.
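To make the target shape concrete, below is a minimal sketch of a document as it should end up in the index (made-up sample values; in practice Logstash will write these documents for us in Step 3):

PUT customerdata/dataforcustomer/1
{
  "CustomerID": 1,
  "Address": "123 Main St",
  "City": "Seattle",
  "State": "WA",
  "Postal_Code": "98101",
  "Latitude": 47.6,
  "Longitude": -122.33,
  "location": {
    "lat": 47.6,
    "lon": -122.33
  }
}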

Well, we are done with Step 2. However, make sure your index and mapping are in place by running the below command in Sense.

GET customerdata

Data should look like below

{
  "customerdata": {
    "aliases": {},
    "mappings": {
      "dataforcustomer": {
        "properties": {
          "@timestamp": {
            "type": "date",
            "format": "strict_date_optional_time||epoch_millis"
          },
          "@version": {
            "type": "string"
          },
          "Address": {
            "type": "string"
          },
          "City": {
            "type": "string"
          },
          "CustomerID": {
            "type": "long"
          },
          "Latitude": {
            "type": "float"
          },
          "Longitude": {
            "type": "float"
          },
          "Postal_Code": {
            "type": "string"
          },
          "State": {
            "type": "string"
          },
          "location": {
            "type": "geo_point"
          }
        }
      }
    },
    "settings": {
      "index": {
        "creation_date": "1459289761727",
        "number_of_shards": "5",
        "number_of_replicas": "1",
        "uuid": "cq7xqNfeTsC-EeE-zZ-AyQ",
        "version": {
          "created": "2020199"
        }
      }
    },
    "warmers": {}
  }
}

Now let's move on to the 3rd step.

Step 3: Configure Logstash conf file to fetch data from SQL and apply some filters

  1. Open Notepad.
  2. Save the file as “customerinformation.conf” in the Logstash bin folder; for my example I am saving this file under “c:\projectKibana\logstash\logstash 2.2.2\bin”.
  3. Open “customerinformation.conf” and start editing. See below.
#JDBC Settings by Sharief Mohatad
#Created On 3/29/2016
input {
  jdbc {
    jdbc_driver_library => "C:\ProjectKibana\MicrosoftSQL\sqljdbc_6.0\enu\sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    # Make sure you enter your ServerName:PortNo and database name correctly
    jdbc_connection_string => "jdbc:sqlserver://MYServerName:1433;databaseName=MyDataBaseName"
    # Make sure you enter your username correctly. If SQL is configured without
    # authentication, remove jdbc_user and jdbc_password
    jdbc_user => "MySQLUserName"
    jdbc_password => "MySQLPassword"
    # If you saved your SQL file in the bin path (same as the conf path) the below works as-is;
    # otherwise mention the full path to your SQL file under "statement_filepath",
    # or replace statement_filepath with statement => "select * from table" for inline SQL
    statement_filepath => "customerInfo.sql"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
  }
}
filter {
  # The lines below are important in order to convert the lat and long information.
  # My SQL result set has fields named latitude & longitude (the jdbc input lowercases
  # column names by default); replace them based on your SQL input.
  mutate {
    # "location" and lat/lon should be used as-is, per the Logstash documentation.
    # Here we are building a two-dimensional location field as the documentation describes.
    add_field => { "[location][lat]" => [ "%{latitude}" ] }
    add_field => { "[location][lon]" => [ "%{longitude}" ] }
    convert => [ "[location]", "float" ]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    # The index "customerdata" is the one we already created in Sense in STEP 2
    index => "customerdata"
    # Using my identity column "customerid" as the document id; change it as you like
    document_id => "%{customerid}"
    # We created the "dataforcustomer" mapping in STEP 2 within Sense
    document_type => "dataforcustomer"
    manage_template => true
  }
  stdout { codec => rubydebug }
}

Once this is done, save the “customerinformation.conf” file.

Explanation:

In “customerinformation.conf” we used a filter in order to mutate the latitude and longitude data into a custom-mapped two-dimensional field called “location”. As per the Logstash documentation we have to use “lat” and “lon”, as shown in the “customerinformation.conf” filter.

  • So all we did is create a new field called “location” (in previous versions of Logstash it was called coordinates, but in the latest version we have to use “location”, as per the doc)
  • Create sub-fields within location called “lat” & “lon” (the “lat” and “lon” field names are mandatory; we have to name them exactly that)
  • Push data to location.lat and location.lon
  • Convert the “location” field type to “float” (this is per the Logstash doc, and is mandatory)

or else we might end up with some issues. Well, at least I had some issues 🙂
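With the filter in place, each event printed by the rubydebug stdout output should look roughly like this (a trimmed sketch with made-up values; exact formatting varies by Logstash version):

{
  "customerid" => 1,
  "latitude" => 47.6,
  "longitude" => -122.33,
  "location" => {
    "lat" => 47.6,
    "lon" => -122.33
  },
  "@version" => "1",
  "@timestamp" => "2016-03-29T22:16:01.727Z"
}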

Step 4: Run your config file using the command prompt (CMD)

Now run the config file “customerinformation.conf” from the “bin” directory, as below per my example:

c:\ProjectKibana\logstash\logstash 2.2.2\bin> logstash -f customerinformation.conf

If you don't get any warnings or errors, it means all the above steps are working fine and the data should be available in Kibana.
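Before moving on to Kibana, you can also verify in Sense that documents actually landed in the index; this quick sanity check just returns the first hit:

GET customerdata/dataforcustomer/_search
{
  "size": 1
}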

Step 5: Open Kibana

  1. Run “http://localhost:5601/app/kibana”
  2. You should be defaulted to Settings -> Indices -> Configure an index pattern.
  3. Under “Index name or pattern” enter “customerdata”
  4. You should see “Time-field name”, and the drop-down should show @timestamp. Select @timestamp and click on the green “Create” button.
  5. Once that is done, the page will redirect you to customerdata with all the field names. Notice that the field named “location” has the data type “geo_point”.
  6. Now click on the “Discover” tab. You should see all the data. From here you can plug and play whatever fields you need for your visualization and save it; however, I prefer a new search for my viz.
  7. Click on the “Visualize” tab and select “Tile map”.
  8. Click on new search and select your index. In this case it is “customerdata”.
  9. By default, “Metrics” should be “Count”.
  10. Under Buckets -> select Geo Coordinates.
  11. You should see something like below:
    1. Aggregation -> Geohash
    2. Field -> location (the one we mutated in “customerinformation.conf”)
    3. Click on the Play (green) button.
    4. Without any issues, you should see your map with data.

heatmap
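If you are curious what Kibana is doing under the hood, the tile map runs a geohash_grid aggregation against the location field. An equivalent Sense query looks roughly like this (the precision value is just an example):

GET customerdata/dataforcustomer/_search
{
  "size": 0,
  "aggs": {
    "customer_grid": {
      "geohash_grid": {
        "field": "location",
        "precision": 3
      }
    }
  }
}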

Hope this helps. Let me know if you have any questions or suggestions on this tutorial example; it worked fine without any issues for me.

 

MS-SQL config with Logstash

  1. The only way to connect MS-SQL (or any other database) to Logstash is through a JDBC driver (this document is valid only for Logstash 2.2.2).
  2. In this blog I am going to explain what to download, from where, and how to configure the JDBC driver with Logstash.

Prerequisites:

  1. Elasticsearch
  2. Logstash

Best practice: Please read the documentation before you do anything.

Let's dig in:

  1. Download the JDBC connector from Microsoft JDBC Drivers 6.0 (Preview), 4.2, 4.1, and 4.0 for SQL Server (if the link is wrong, you can always google it).
  2. Scroll to the list and download the correct file based on your system requirements. For this article's sake I am downloading “sqljdbc_6.0.6629.101_enu.exe”.
  3. Run the exe file and save the extracted files in your project location; see below for my example.
c:\ProjectKibana\MySqlDriver

     4. Next step: go to your Logstash folder

c:\ProjectKibana\Logstash\Logstash 2.2.2\bin

     5. Create a new conf file named JDBCConnector.Conf and edit it as below

input {
  jdbc {
    jdbc_driver_library => "C:\ProjectKibana\MySqlDriver\sqljdbc_6.0\enu\sqljdbc42.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://YOUR SERVER NAME:1433;databaseName=YOUR DATABASE NAME"
    jdbc_user => "YOUR DATABASE USERNAME, IF YOU DONT HAVE ONE REMOVE jdbc_user"
    jdbc_password => "YOUR DATABASE PASSWORD, IF YOU DONT HAVE ONE REMOVE jdbc_password"
    statement => "select * from TABLE"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
  }
}
# If you want to add a filter you can add one
# filter {
#   .....
# }

output {
  elasticsearch {
    hosts => "localhost:9200"
    # Note: Elasticsearch index names must be lowercase
    index => "Your ES IndexName"
    document_id => "%{Table_id}"
    document_type => "Any Name for your table in ES"
    manage_template => true
  }
  stdout { codec => rubydebug }
}

All done; you are good to go. Now run your conf file using the command prompt.

c:\ProjectKibana\logstash\logstash 2.2.2\bin> logstash -f  JDBCConnector.Conf --debug

Useful information about JDBC:

If you want more information about the JDBC connection URL itself, please read Building the Connection URL.
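For quick reference, the general form of the connection URL per that documentation, followed by a couple of common examples (server, database and property values here are placeholders):

jdbc:sqlserver://[serverName[\instanceName][:portNumber]][;property=value[;property=value]]

jdbc:sqlserver://localhost:1433;databaseName=AdventureWorks
jdbc:sqlserver://MyServer\SQLEXPRESS;databaseName=AdventureWorks;integratedSecurity=true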

 

Using Font Awesome as a Select Option

Technically, the option tag is used only for text content, so it is not possible to use a Font Awesome icon directly inside a select option.

 

However, Font Awesome is a Unicode-font-based icon set, so it is possible to include an icon in an option, or a select list, simply by referencing its Unicode character (which Font Awesome lists in its cheatsheet) directly.

 

Simply use the same code as-is, and just change the Unicode character as you wish using the Font Awesome cheatsheet linked above. (In this sketch I use &#xf164; and &#xf165;, the thumbs-up and thumbs-down icons, as examples; pick whichever icons fit your case.)
<select class="form-control" style="font-family:'FontAwesome', Arial;">
<option value="1">&#xf164; Positive</option>
<option value="0">&#xf165; Negative</option>
</select>
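For the font-family above to resolve, the Font Awesome stylesheet must of course be loaded on the page, for example via CDN (the URL below is for Font Awesome 4.7.0; adjust to whichever version you use):

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css">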

Output :

select option fontawesome

Prevent Screen Rotation in PhoneGap / Android

You may have an app at some point that requires a fixed screen orientation. Using PhoneGap on Android, this is done by editing your project's AndroidManifest.xml. This file is located in your project root. Double-click the file to open it within Eclipse and note the activity node; the relevant change is sketched below.
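A minimal sketch of the activity node with a fixed orientation (the activity name is a placeholder; keep your project's existing attributes and just add android:screenOrientation):

<activity android:name=".MainActivity"
          android:label="@string/app_name"
          android:screenOrientation="portrait">
</activity>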
