
Enabling access control on a MongoDB deployment enforces authentication, requiring users to identify themselves. When accessing a MongoDB deployment that has access control enabled, users can only perform actions as determined by their roles.

With access control enabled, ensure you have a user with the userAdmin or userAdminAnyDatabase role in the admin database. This user can administer users and roles: create users, grant or revoke roles from users, and create or modify custom roles.

You can create users either before or after enabling access control. If you enable access control before creating any user, MongoDB provides a localhost exception which allows you to create a user administrator in the admin database.

Once created, you must authenticate as the user administrator to create additional users as needed.

First we will look at starting the MongoDB server and client without access control:

Starting MongoDB Server:
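The original command is not preserved in this archive; a minimal invocation (the data directory path is an assumption) looks like:

```
mongod --dbpath /data/db
```

By default, mongod listens on port 27017.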

Starting MongoDB client:
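Likewise the client; with no arguments the mongo shell connects to localhost on port 27017:

```
mongo
```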

Note: If we want to provide any command line options, for example port number, host, etc., we can. For more command line options check the reference here or type mongod --help at the command prompt.

Creating User Administrator:

In the admin database, add a user with the userAdminAnyDatabase role. For example, the following creates the user myUserAdmin in the admin database:
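The snippet itself was not preserved; the standard form of this command in the mongo shell (the password is a placeholder) is:

```
use admin
db.createUser(
  {
    user: "myUserAdmin",
    pwd: "abc123",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)
```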

Start the MongoDB instance with access control:

Start the mongod instance with the --auth command line option or, if using a configuration file, the security.authorization setting.
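For example (the dbpath is an assumption):

```
mongod --auth --dbpath /data/db
```

Or, in the mongod configuration file:

```yaml
security:
  authorization: enabled
```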

Note: Clients that connect to this instance must now authenticate themselves as a MongoDB user. Clients can only perform actions as determined by their assigned roles.

Connect and authenticate as the user administrator:

Using the mongo shell, you can:

  • Connect with authentication by passing in user credentials, or
  • Connect first without authentication, and then issue the db.auth() method to authenticate.

To authenticate during connection:

Start a mongo shell with the -u <username>, -p <password>, and --authenticationDatabase <database> command line options:
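For example (the credentials are placeholders):

```
mongo --port 27017 -u "myUserAdmin" -p "abc123" --authenticationDatabase "admin"
```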

To authenticate after connecting:

Connect the mongo shell:

Switch to the authentication database (in this case, admin):

and use the db.auth(<username>, <pwd>) method to authenticate, as follows:
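For example (the credentials are placeholders); db.auth() returns 1 on success and 0 on failure:

```
use admin
db.auth("myUserAdmin", "abc123")
```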

Create additional users as needed for your deployment:

Once authenticated as the user administrator, we can create any number of users. Use db.createUser() to create additional users. You can assign any built-in roles or user-defined roles to the users.

The myUserAdmin user only has privileges to manage users and roles. As myUserAdmin, if you attempt to perform any other operations, such as read from a foo collection in the test database, MongoDB returns an error.

The following operation adds a user myTester to the test database who has readWrite role in the test database as well as read role in the reporting database.
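In the mongo shell (the password is a placeholder):

```
use test
db.createUser(
  {
    user: "myTester",
    pwd: "xyz123",
    roles: [ { role: "readWrite", db: "test" },
             { role: "read", db: "reporting" } ]
  }
)
```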

We can connect to the test database with the myTester user just as we discussed for the admin database.

Note: The database where you create the user is the user’s authentication database. Although the user would authenticate to this database, the user can have roles in other databases; i.e. the user’s authentication database does not limit the user’s privileges.

Thank You 🙂

We should encrypt sensitive properties, like passwords, in real-world projects to reduce the risk of compromise.

Here we are using the jasypt-spring-boot dependency with a Spring Boot project to encrypt properties and use those properties in code.

Below are the dependencies for different build tools:

Maven:
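The dependency block was not preserved here; for Maven it looks like the following (the version shown is just an example):

```xml
<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot-starter</artifactId>
    <version>1.18</version>
</dependency>
```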

Gradle:
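And the Gradle equivalent (the version is again an example):

```groovy
compile 'com.github.ulisesbocchio:jasypt-spring-boot-starter:1.18'
```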

application.properties:

An example application.properties file in a Spring Boot application looks like the following:
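The original file is not reproduced in this archive; a sketch with made-up values follows — only the ENC(...) wrapper is significant, as it marks a jasypt-encrypted value:

```properties
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=mydb
spring.data.mongodb.username=myuser
spring.data.mongodb.password=ENC(G6N718UuyPE5bHyWKyuLQSm02auQPUtm)
```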

As you can see, spring.data.mongodb.password value is encrypted. But how was this generated?

We can generate it by using jasypt as below:
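For example, with the standalone jasypt jar (the jar version and the input secret are placeholders); the encrypted string is printed under the OUTPUT section:

```
java -cp jasypt-1.9.2.jar org.jasypt.intf.cli.JasyptPbeStringEncryptionCLI \
  input="myMongoDbPassword" password=myEncPwd algorithm=PBEWithMD5AndDES
```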

Usage:

We can use the spring.data.mongodb.password property in any Spring component just like any other property.

Example:
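A hedged sketch (the class and field names are made up):

```java
@Component
public class MongoProperties {

    // jasypt-spring-boot decrypts the ENC(...) value before injection,
    // so this field receives the plain-text password
    @Value("${spring.data.mongodb.password}")
    private String mongoPassword;
}
```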

Running the Application:

When running the application, we should provide the value that was used to generate the encrypted password, i.e. myEncPwd.
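For example (the jar name is a placeholder); jasypt-spring-boot reads the key from the jasypt.encryptor.password property:

```
java -jar myapp.jar --jasypt.encryptor.password=myEncPwd
```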

 

Thank You 🙂

Here are some of the new features introduced in Ruby 2.3:

Safe Navigation (&.):

A new operator (&.) has been introduced. It can be very useful in cases where you need to check if an object is nil before calling a method on it. It will return nil if the object equals nil, otherwise it calls the method on the object.
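A quick example:

```ruby
account = nil
account&.upcase   # => nil, no NoMethodError is raised
"ruby"&.upcase    # => "RUBY", the receiver is not nil so the method runs
```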

Immutable string:

Strings are mutable by default in Ruby. Immutable strings give us improved performance because Ruby can allocate fewer objects. Ruby 2.3 allows you to optionally make all string literals frozen by default. You can enable this by adding the comment frozen_string_literal: true at the start of the file.
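The effect can be seen with an explicitly frozen string (the magic comment freezes every literal in the file the same way):

```ruby
s = "hello".freeze   # with `# frozen_string_literal: true`, plain "hello" would already be frozen
s.frozen?            # => true

# attempting to modify a frozen string raises an error
s << " world" rescue $!   # RuntimeError (FrozenError on newer Rubies)
```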

Squiggly heredoc (<<~):

Heredocs let us span strings over multiple lines

ActiveSupport gave us strip_heredoc, but with a separate library (or Rails)

But now Ruby gives us THE SQUIGGLY HEREDOC
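<<~ strips the leading indentation, based on the least-indented line of the heredoc:

```ruby
text = <<~GREETING
  Hello
    World
GREETING

text  # => "Hello\n  World\n" — the common two-space indentation is removed
```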

Enumerable#grep_v:

The grep_v method is equivalent to the -v option in the command line grep utility. It returns the list of items that do not match the condition.
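For example:

```ruby
(1..10).grep_v(2..5)                  # => [1, 6, 7, 8, 9, 10]
%w[apple banana cherry].grep_v(/an/)  # => ["apple", "cherry"]
```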

Dig:

Dig method simplified accessing nested element in an array or a hash.

Array#dig 

Hash#dig
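For example:

```ruby
a = [[1, [2, 3]]]
a.dig(0, 1, 1)                  # => 3

h = { user: { address: { city: "Hyderabad" } } }
h.dig(:user, :address, :city)   # => "Hyderabad"
h.dig(:user, :phone)            # => nil — missing keys return nil, no error
```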

Suggestions

When you get a NoMethodError because of a typo in the method name, Ruby now helpfully suggests other method names similar to that one.
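For example (the suggestion text comes from the did_you_mean gem, which is loaded by default from Ruby 2.3 on):

```ruby
begin
  "hello".lenght   # typo for #length
rescue NoMethodError => e
  e.message        # with did_you_mean loaded, the message suggests: length
end
```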

Hash “comparison”

Now we can compare two hashes. a >= b checks whether all the key-value pairs in b are also present in a.

In the first example above, the key-value pair [:x, 1] in the RHS is a subset of those in the LHS – [ [:x, 1], [:y, 2] ], so it returns true.
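In code:

```ruby
{ x: 1, y: 2 } >= { x: 1 }        # => true  — RHS pairs are a subset of LHS
{ x: 1 }       >= { x: 1, y: 2 }  # => false
{ x: 1 }       <  { x: 1, y: 2 }  # => true  — strict subset
```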

Hash#to_proc

Hash#to_proc returns a lambda that maps the key with the value. When you call the lambda with a key, it returns the corresponding value from the hash.
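For example:

```ruby
marks = { english: 70, maths: 90 }
lookup = marks.to_proc
lookup.call(:english)            # => 70
[:english, :maths].map(&marks)   # => [70, 90]
```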

Hash#fetch_values:

This method works like Hash#values_at – it fetches the values corresponding to the list of keys we pass in. The difference is that #values_at returns nil when the key doesn’t exist, while #fetch_values raises a KeyError for keys that aren’t present.
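For example:

```ruby
h = { a: 1, b: 2 }
h.values_at(:a, :missing)    # => [1, nil]
h.fetch_values(:a, :b)       # => [1, 2]
begin
  h.fetch_values(:a, :missing)
rescue KeyError => e
  e            # a KeyError is raised for the missing key
end
```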

 

Introduction:

RabbitMQ is a message broker. The principal idea is pretty simple: it accepts and forwards messages.

The core idea in the messaging model in RabbitMQ is that the producer never sends any messages directly to a queue. Actually, quite often the producer doesn’t even know if a message will be delivered to any queue at all.

Instead, the producer can only send messages to an exchange. An exchange is a very simple thing. On one side it receives messages from producers and on the other side it pushes them to queues. The exchange must know exactly what to do with a message it receives. Should it be appended to a particular queue? Should it be appended to many queues? Or should it be discarded? The rules for that are defined by the exchange type.

[Image: producer, exchange and queues]

 Note that the producer, consumer, and broker do not have to reside on the same machine; indeed in most applications they don’t.

Prerequisites:

This tutorial assumes RabbitMQ is installed and running on localhost on the standard port (5672). In case you use a different host, port or credentials, the connection settings would require adjusting.

Types of exchanges:

  • Direct: A direct exchange delivers messages to queues based on a message routing key. The message is routed to the queue(s) whose binding key exactly matches the routing key of the message. If a queue is bound to the exchange with the binding key emailprocess, a message published to the exchange with the routing key emailprocess will be routed to that queue.

    The default exchange AMQP brokers must provide for the direct exchange is “amq.direct”.

    [Image: direct exchange]

    The direct exchange type is useful when you would like to distinguish messages published to the same exchange using a simple string identifier.

  • Fanout: A fanout exchange routes messages to all of the queues that are bound to it. The fanout exchange copies and routes a received message to all queues bound to it, regardless of routing keys or pattern matching as with direct and topic exchanges. Any routing key provided is simply ignored.

    The default exchange AMQP brokers must provide for the fanout exchange is “amq.fanout”.

    [Image: fanout exchange]

    Fanout exchanges can be useful when the same message needs to be sent to one or more queues with consumers who may process the same message in different ways.

  • Topic: 

    Topic exchanges route messages to queues based on wildcard matches between the routing key and something called the routing pattern specified by the queue binding. Messages are routed to one or many queues based on a matching between a message routing key and this pattern.

    Messages sent to a topic exchange can’t have an arbitrary routing_key – it must be a list of words, delimited by dots. The words can be anything, but usually they specify some features connected to the message. A few valid routing key examples: “stock.usd.nyse”, “nyse.vmw”, “quick.orange.rabbit”. There can be as many words in the routing key as you like, up to the limit of 255 bytes.

    The binding key must also be in the same form. The logic behind the topic exchange is similar to a direct one – a message sent with a particular routing key will be delivered to all the queues that are bound with a matching binding key. However there are two important special cases for binding keys:

    * (star) can substitute for exactly one word.
    # (hash) can substitute for zero or more words.

    It’s easiest to explain this in an example:

    [Image: topic exchange]

    We created three bindings: Q1 is bound with binding key “*.orange.*”, and Q2 with “*.*.rabbit” and “lazy.#”.

    These bindings can be summarised as:
    Q1 is interested in all the orange animals.
    Q2 wants to hear everything about rabbits, and everything about lazy animals.
         
    The default exchange AMQP brokers must provide for the topic exchange is “amq.topic”.

    • You can find more programmatic examples here. Hope this article helps you to understand three major types of RabbitMQ Exchange.

      Thank you for reading this article!
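As a postscript, the * and # matching rules described above for topic exchanges can be sketched in plain code. This is only an illustration of the matching semantics, not RabbitMQ code:

```python
def topic_matches(pattern: str, routing_key: str) -> bool:
    """Return True if an AMQP-style binding pattern matches a routing key.

    '*' matches exactly one dot-delimited word; '#' matches zero or more.
    """
    return _match(pattern.split("."), routing_key.split("."))


def _match(pat, key):
    if not pat:
        return not key                      # pattern used up: key must be too
    if pat[0] == "#":
        # '#' may swallow zero or more words: try every split point
        return any(_match(pat[1:], key[i:]) for i in range(len(key) + 1))
    if not key:
        return False                        # words left in pattern, none in key
    if pat[0] == "*" or pat[0] == key[0]:
        return _match(pat[1:], key[1:])     # '*' or a literal word matched
    return False


print(topic_matches("*.orange.*", "quick.orange.rabbit"))       # True
print(topic_matches("*.orange.*", "quick.orange.male.rabbit"))  # False
print(topic_matches("lazy.#", "lazy"))                          # True
```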

It’s easy to store and retrieve data in MongoDB using Node.js through POST and GET API methods. When it comes to storing an image, though, it’s different from storing normal field data; when I first tried to store and read an image, I found it a bit tough.

So this post shows you an easy, understandable way of storing and getting an image from MongoDB using Node.js, Express, and Mongoose.

The approach is: upload the image and store it in a project directory (uploads), and save the image path in the Mongo collection; when getting the image, read the image path from MongoDB and use it to reach the exact file.

My project structure looks like :

[Image: project structure]

  • uploads is the directory which stores the images/files we upload.
  • routes has the file ‘imagefile.js’ which contains the logic to store and get images/files.
  • views has the file ‘index.ejs’ which contains the HTML upload form.
  • app.js is the main file.
  • package.json has the information about the installed packages.

The code inside the files is as follows :

package.json


{
"name": "my-nodejs-api",
"version": "1.0.0",
"description": "simple api app",
"main": "app.js",
"scripts": {
"test": "echo \"Error: no test specified\" &amp;&amp; exit 1"
},
"dependencies": {
"express":"*",
"body-parser":"*",
"mongoose":"*",
"multer":"*",
"path":"*"
},
"author": "Kamran Athar",
"license": "ISC"
}

app.js 


var express = require('express');
var app = express();
var bodyParser = require('body-parser');
var mongoose = require('mongoose');
var path = require('path');
app.use(bodyParser.json());

//To get the access for the functions defined in index.js class
var routes = require('./routes/imagefile');

// connect to mongo
// I have created the mongo collection in mlab.com; the below is my database access URL,
// so make sure you give your own connection details
mongoose.connect('mongodb://nodejsapi:nodejsapi@ds041516.mlab.com:41516/mynodejsapp');

app.use('/', routes);

//URL : http://localhost:3000/images/
// To get all the images/files stored in MongoDB
app.get('/images', function(req, res) {
// calling the function from imagefile.js using the routes object
routes.getImages(function(err, genres) {
if (err) {
throw err;

}
res.json(genres);

});
});

// URL : http://localhost:3000/images/(give you collectionID)
// To get the single image/File using id from the MongoDB
app.get('/images/:id', function(req, res) {

// calling the function from imagefile.js using the routes object
routes.getImageById(req.params.id, function(err, genres) {
if (err) {
throw err;
}
//res.download(genres.path);
res.send(genres.path)
});
});

app.listen(3000);

console.log('Running on port 3000');

index.ejs


<html>
<head>
 <title>test</title>
</head>
<body>
<form action "/" method="POST" enctype="multipart/form-data">
 <input type="file" name="myimage" ></input>
 <input type="submit" name="submit" value="submit"></input>
</form>
</body>
</html>

imagefile.js


var express = require('express');
var router = express.Router();
var multer = require('multer');
var mongoose = require('mongoose');

//path and originalname are the fields stored in mongoDB
var imageSchema = mongoose.Schema({
 path: {
 type: String,
 required: true,
 trim: true
 },
 originalname: {
 type: String,
 required: true
 }

});


var Image = module.exports = mongoose.model('files', imageSchema);

router.getImages = function(callback, limit) {

 // build the query first, then execute it; passing the callback
 // straight to find() would run the query before limit() is applied
 Image.find().limit(limit).exec(callback);
}


router.getImageById = function(id, callback) {
 
 Image.findById(id, callback);

}

router.addImage = function(image, callback) {
 Image.create(image, callback);
}


// To get more info about 'multer'.. you can go through https://www.npmjs.com/package/multer..
var storage = multer.diskStorage({
 destination: function(req, file, cb) {
 cb(null, 'uploads/')
 },
 filename: function(req, file, cb) {
 cb(null, file.originalname);
 }
});

var upload = multer({
 storage: storage
});

router.get('/', function(req, res, next) {
 res.render('index.ejs');
});

router.post('/', upload.any(), function(req, res, next) {

 /* req.files has the information regarding the file you are uploading;
 from the total information, we use just the path and the original name
 to store in the mongo collection (table) */
 var path = req.files[0].path;
 var imageName = req.files[0].originalname;

 var imagepath = {};
 imagepath['path'] = path;
 imagepath['originalname'] = imageName;

 // imagepath contains two fields, path and originalname; pass it to the
 // addImage method defined above, and respond once the document is saved
 router.addImage(imagepath, function(err) {
 if (err) {
 return next(err);
 }
 res.send(req.files);
 });

});

module.exports = router;

URLs to use:
http://localhost:3000/   : HTML page which shows the upload button; submitting it stores the image/file in the project directory and then the image path in MongoDB
http://localhost:3000/images : to get all images/files from MongoDB
http://localhost:3000/images/(mongo collection ID) : to get the path of a single image

Overview: Twilio Client enables you to make voice calls from your browser or native mobile applications. Twilio Client calls are made through either Adobe Flash or WebRTC. Since neither is supported on most mobile web browsers, you can’t make browser calls from the web browser of your smartphone or tablet.

In this blog post we will see how to make outbound phone calls from the browser to a phone using Twilio. We will make use of the twilio-js library.

Setup Twilio Credentials and TwiML App:

  • Account SID – Your primary Twilio account identifier; find this in the Twilio console.
  • Auth Token – Used to authenticate; just like the above, you’ll find this in the console.
  • TwiML App SID – The TwiML application with a voice URL configured to access your server running this app; create one in the console. You will also need to configure the Voice “REQUEST URL” on the TwiML app once you’ve got your server up and running.
  • Twilio Phone # – A Twilio phone number in E.164 format; you can get one from the console.

When our app makes a call from the browser using the twilio-js client, Twilio first creates a new call connection from our browser to Twilio. It then sends a request back to our server to get information about what to do next. We can respond by asking Twilio to call a number, say something to the person after the call is connected, record the call, etc.

Generate capability token from your servlet: See the below code snippet for creating the capability token from servlet.

Use the above token in JavaScript: The token tells the twilio-js client what permissions the application has, such as making calls, accepting calls, sending SMS, etc. See the JavaScript snippet below.
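The original snippet is not preserved in this archive; a sketch based on the twilio.js 1.x quickstart (the function name and the way the token variable is obtained are assumptions) might look like:

```javascript
// `token` holds the capability token fetched from the servlet
Twilio.Device.setup(token);

Twilio.Device.ready(function (device) {
  console.log('Twilio.Device is ready to make calls');
});

function call(phoneNumber) {
  // the key/value params are forwarded to the TwiML voice URL
  Twilio.Device.connect({ phoneNumber: phoneNumber });
}
```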

Collect the phone number from HTML/JSP: Collect the mobile number from the HTML/JSP file. See the HTML snippet below.

Then add the CSS for the above HTML file. See the CSS snippet below.

TwiML response generator from the POST servlet: This servlet handles the callbacks from Twilio and returns a TwiML response. See the servlet code snippet below.
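The servlet snippet itself is not preserved; the TwiML it returns to dial out to the collected number would look something like this (the caller ID and destination are placeholders):

```xml
<Response>
    <Dial callerId="YOUR_TWILIO_NUMBER">
        <Number>THE_NUMBER_TO_CALL</Number>
    </Dial>
</Response>
```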

Define the web.xml: The above two servlets are configured in the web.xml file. See the snippet below.

Run your application:
— Run your application at http://localhost:8080
— Run ngrok: ngrok http 8080
— When ngrok starts up, it will assign a unique URL to your tunnel. It might be something like http://0a2de35d.ngrok.io. Take note of this.
— Configure your TwiML app’s Voice “REQUEST URL” to be your ngrok URL plus /voice. For example: http://0a2de35d.ngrok.io/TwilioTest/voice

[Image: TwiML app configuration]

Output: When you run your application and enter a phone number in the input field, the browser makes a call to the given phone number.

[Image: browser call output]

Happy Calling 🙂

This blog gives insights into the performance testing of a RESTful API.

For this we are using JMeter as the testing tool, owing to it being an open source, pure Java tool designed for the performance measurement of web applications. It provides graphs and visualization techniques to analyze the results as well.

Before going deep, we have to know what a RESTful API is.

What is RESTFUL API?

REST (Representational State Transfer) is a simple stateless architecture that uses HTTP protocol.

Nowadays, REST has become the most widely used model for web service implementation, so clients and third-party applications can access its resources using a URI (Uniform Resource Identifier). It is lightweight, not strongly typed and, unlike SOAP, isn’t bound to only the XML format. So it has become a hot topic for QAs as well; knowing how to test REST APIs adds some weight to your resume.

Why performance testing is necessary for an application

  • To identify the maximum operating capacity of a system.
  • To identify any bottleneck that may occur in system operation rather than in development.
  • To determine the speed or performance of an application or system under heavy load.
  • To test robustness, availability and reliability under extreme conditions.

The API methods are as follows:

Using JMeter, we will design the test that will

  • Use HTTP GET to retrieve a list of all items
  • Use HTTP POST to add a new item
  • Use HTTP PUT to update a newly added item
  • Use HTTP DELETE to delete the item added

Let’s create a JMeter test plan as below.

In the steps given below we can understand how to configure and use JMeter for Performance testing of RESTful API.

1. Add “Test Plan” element.

[Screenshot: Test Plan]

You can rename the test plan to a name of your choice. Also, we can add user defined variables to the test plan for further use.

2. Add “Thread Group” element.

So first we need to add a Thread Group to our test plan. A thread group basically represents the number of users we want to test with.

To add a thread group, right click on the test plan and go to Add –> Threads (Users) –> Thread Group, as shown in the below screenshot.

[Screenshot: adding a Thread Group]

Thread Group window will look like below.

[Screenshot: Thread Group window]

Here in the Thread Group

a. Set the Thread Group name of your choice.

b. Set the number of threads, i.e. the number of concurrent users.

c. Set the Ramp-Up period; it is the time interval in seconds between starting one thread and the next.

d. Set the Loop Count; it is the number of times this thread group will execute.

3.  Add “HTTP Request” Sampler element.

Now we need to add an HTTP Request to our thread group. For that, right click on the Thread Group and go to Add –> Sampler –> HTTP Request,

as shown in the below screenshot.

[Screenshot: adding an HTTP Request sampler]

HTTP request sampler window will look like below

[Screenshot: HTTP Request sampler window]

For the above request sampler we can also add a name to the request (here, Seeker login), and we have to provide the server name or IP of the API and its port (if required) in the web server section.

The HTTP Request Section will be required to be filled as

Server Name or IP – Name of the server (e.g. www.uat2.carezen.net).

Method – Select Get, Put, Post, Delete, etc., depending on the method the API was built over (in our demo we will be working with the Post method).

Path – Application path after the API_URL (e.g. for the API_URL ‘www.uat2.carezen.net/platform/spi/auth/visitor/login’ the Path will be ‘/platform/spi/auth/visitor/login’).

Parameters/Post Body – Add the request body of the API here, either in the “Parameters” section by adding the individual parameters (refer to the screenshot below for detail), or as a raw post body by selecting the “Body Data” section.

4. Add “HTTP Header Manager” Config element.

This is a REST API; we will pass data as a JSON object, and for any API request we need an API key (related to the device and the project to access) and an API auth token. For that we need to add the config element “HTTP Header Manager”. Right click the HTTP Request and add Config Element –> HTTP Header Manager, as shown in the below screenshot.

[Screenshot: adding an HTTP Header Manager]

We can add the HTTP Header Manager to the thread group as well.

The HTTP Header Manager window will look like below.

[Screenshot: HTTP Header Manager window]

To add a new header, click Add and provide the header name and value. These headers will be added to all your HTTP requests during execution of the threads.

5.  Add “CSV Data Set Config” Config element.

APIs need inputs, and JMeter provides a way to supply inputs from a CSV file. You can create a CSV file of your inputs and provide this file to JMeter. JMeter will read your CSV file line by line and provide this data to your HTTP request.

So how to provide inputs from a CSV file? It’s pretty simple.

Create your CSV file; I have created the logindetails.csv file (attached below). Now right click on the thread group and go to Add –> Config Element –> CSV Data Set Config, as shown below.

[Screenshot: adding a CSV Data Set Config]

CSV data set config window will look like below.

[Screenshot: CSV Data Set Config window]

So first name your CSV Data Set Config, then provide the file name or path (it is recommended that you place the CSV file in the directory where your .jmx file is present) and the file encoding. Most important are your variables: name the variables in the sequence in which they appear in the CSV file. I have taken only two variables, username and password, and my CSV file data is in the same sequence.
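The attached file is not reproduced in this archive; a logindetails.csv with a username column followed by a password column might look like this (the values are made up):

```
testuser1@example.com,Password@123
testuser2@example.com,Password@456
testuser3@example.com,Password@789
```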

If you want to use more than one CSV Data Set Config element in your thread group, that is possible, but you need to make sure the variable names do not repeat within the thread group.

Now, to access a variable in your HTTP Request, you can simply use the syntax ${Variable_Name}.

so we pass the login username and password like below.

Username as ${Username}

Password as ${Password}

Have a look at how I have modified the parameters in the HTTP request sampler to access input from the CSV file.

[Screenshot: HTTP Request using the CSV variables]

The HTTP request parameters now use the CSV file input.

6.  Add “Response Assertion” element

To verify the item inserted, add a Response Assertion to the HTTP Request sampler. Response Assertion lets you add pattern strings to be compared against various fields of the response.

For this, right click on the HTTP Request and add Assertions –> Response Assertion, as shown in the below screenshot.

[Screenshot: adding a Response Assertion]

Select the “Text Response” radio button for “Response Field to Test”, set “Pattern Matching Rules” to “Contains”, and add the item name in “Patterns to Test”.

[Screenshot: Response Assertion settings]

Please note that if the assertion fails, the request is marked as failed, but by default the test does not stop and subsequent requests are still executed.

7.  Add “User Defined Variables” to the total Test Plan.

We can also add user defined variables for the entire test by adding the “User Defined Variables” config element.

For this, right click on the Test Plan and add Config Element –> User Defined Variables, as shown in the below screenshot.

[Screenshot: adding User Defined Variables]

By using this “User Defined Variables” config element, there is no need to pass the server name or IP every time in the HTTP Request: we add the server name against a variable name in the user defined variables, and then pass that variable name in the HTTP Request sampler. We can add any number of variables.

Below are the user defined variables.

[Screenshot: User Defined Variables]

8. Listeners

Listeners are the elements where you actually see the results of your API test. There are various listeners available in JMeter, but for an API test we mainly need the View Result Tree listener. To add a listener, right click on the thread group and go to Add –> Listener –> View Result Tree.

View Result Tree Listener:

With the View Result Tree listener we can see the input provided and the output of each request sent to the server.

Have a look at the View Result in Tree window. On the left side of the window you will see the APIs executed: if they are green, the API executed successfully; if they are red, the API execution had some problem, and you can see the reason in the right frame of the window.

In the Sampler result tab you can see all the details of the HTTP response from the server. In the Request tab you can see the HTTP URL hit and the data posted with the request, and in the Response tab you can see the response data of the request. All these are shown in the below screenshots.

a. Sampler Result Tab


View Result Tree Listener Sampler Result Tab

b. Request tab


View Result Tree Listener Request Tab

c. Response tab


View Result Tree Listener Response Tab

In this way, we can do the performance as well as functional testing of a RESTful API.

By increasing the number of threads and loop count, we can increase the load on the server and measure various vitals of the server and API, such as CPU Utilization and average response time.
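Once the plan is saved as a .jmx file, the same test can also be run headless from the command line (the file names here are just examples):

```
# -n: non-GUI mode, -t: test plan file, -l: file to log results to
jmeter -n -t rest_api_test.jmx -l results.jtl
```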

You can download  csv file of my loginApi for reference from here Seeker_login_details

If we need a CSV file with huge data, in the millions of records, to use for load testing, performance testing, etc., we can generate it using the below script.

—————————————————————————————————-

#!/usr/bin/python
import csv
import random

i = 0

with open('test.csv', 'w') as csvfile:
    a = csv.writer(csvfile, delimiter=',')
    while i < 1000000000:
        phone = random.randint(100000000000, 999999999999)
        id = random.randint(100000000, 999999999)
        data = [phone, id]
        a.writerow(data)
        i = i + 1

——————————————————————————————————-

The above script generates a CSV file with two columns, ‘phone’ and ‘id’. The phone is a randomly generated 12-digit number and the id is a 9-digit number. The generated file will be named ‘test.csv’. Comma (,) is the delimiter between the two fields. ‘i’ is incremented until the given limit is reached, so the limit controls the file length: if we want 10M records, the limit should be 10M. If we want an additional column, it should be added to ‘data’.

 

This is an example of how to implement pagination in AngularJS with Grails Rest api as backend.

If you only want this example , you can have the project on Github:
Angular Client: https://github.com/sandeep-purma/angular_app
Grails Rest API: https://github.com/sandeep-purma/grails-rest-example

If you are interested in discovering how I integrated Angular with the Grails REST API and handled totalCount and offset for pagination, here are the steps.

Before going to look at the actual steps, let us look at how we implement pagination in our Grails web application without using rest api.

To implement pagination for a view, we make use of the GSP tag ‘<g:paginate>’. This tag sends and receives specific parameters that tell GORM list methods how many instances to bring back (max), which set of instances to pick out of the result set (offset), and the overall size of the result set (totalCount).

So, a typical gsp view contains the following logic to show paginated results

<g:paginate controller="Book" action="list"  total="${bookInstanceTotal}" />

And controller will have the following logic

def booksList = Book.list(max:maxRows, offset:rowOffset) { }
log.debug "Total records: $booksList.totalCount"
render view:'list', model: [list:booksList, bookInstanceTotal: booksList.totalCount]

If we look at the above logic briefly: we query the DB for Book instances using the list() method on a criteria, which takes the pagination parameters (max and offset) and returns an instance of ‘grails.orm.PagedResultList’, which provides a handy totalCount that gives us the total row count.

So now we want to make our Grails app offer a RESTful API and build an AngularJS client that uses the API; the GSPs are then no longer needed.

We replace the GSP views with Angular views, so to implement pagination we need an equivalent of the GSP tag <g:paginate> on the Angular side.

That is, we show a pager in the view using an AngularJS directive, and on each pager button click we update and send the pagination parameters (max, offset) from the Angular controller to the Grails REST API, which queries the DB and returns a JSON response with the list of instances and totalCount.

<ul uib-pagination total-items="totalItems" ng-model="$parent.currentPage" ng-change="pageChanged()" class="pagination-sm pull-right" items-per-page="itemsPerPage" max-size="maxSize" previous-text="&lsaquo;" next-text="&rsaquo;" ></ul>

 

$scope.totalItems = 0;
$scope.currentPage = 1;
$scope.itemsPerPage = 5; //max items per page
$scope.offset = 0;
$scope.maxSize = 4; //Number of pager buttons to show
$scope.getAllContacts  = function(){
//get the data list and totalCount of data items to paginate
$scope.totalItems = totalCount
..........................
}
$scope.pageChanged = function() {
//set the offset on click of pager button
$scope.offset = ($scope.currentPage-1)*$scope.itemsPerPage;
$scope.getAllContacts();
};

 

Set up the Grails REST API:

Step 1: Create a grails rest application, configure plugins and datasource.

Step 2: Create domain class.

package com.test.myapp

class Contact {
String name
String location
String phoneNumber
Date createdOn
Date updatedOn
}

Step 3: Create a RESTful controller to expose the API to get the data.

package com.test.myapp.api

import grails.rest.RestfulController
import com.test.myapp.RestResponse
import com.test.myapp.Contact

class RestContactController extends RestfulController {
static responseFormats = ['json', 'xml']
static namespace = 'v0'

def list(ContactListCommand cmd) {
def max = cmd.max ?: 10;
def offset = cmd.offset ?: 0
def contactList = Contact.createCriteria().list(max:max, offset:offset){
order("id","asc")
}
def resultsMap = [:]
resultsMap.put("contacts", contactList)
resultsMap.put("totalCount",contactList.totalCount)
respond new RestResponse(status: RestResponse.SUCCESS, type: RestResponse.SUCCESS, message: "Notes list.", results: [resultsMap])
}
}

class ContactListCommand {

    Long contactId
    Long offset
    Long max

    def getContact() {
        Contact.get(contactId)
    }

    static constraints = {
        contactId nullable: true
        offset nullable: true
        max nullable: true
    }
}

Step 4: Modify the URL mappings

class UrlMappings {
    static mappings = {
        "/api/$namespace/contacts"(controller: 'restContact') {
            action = [GET: 'list']
        }
        "/console"(controller: 'console')
        "/console/$action"(controller: 'console')
        "/$controller/$action?/$id?(.$format)?" {
            constraints {
                // apply constraints here
            }
        }

        "/"(view: "/index")
        "500"(view: '/error')
    }
}

Step 5: Run the app

grails run-app

Set up the Angular client:

Step 1: Generate a simple AngularJS application using the Yeoman angular generator (yo angular).

Step 2: Modify ‘app.js’ to inject the ‘ui.bootstrap’ dependency into the application module (add ‘ui.bootstrap’ to the module’s dependency array) so the Angular pagination directive is available.

Step 3: Create a view to show the pager and the results.


<div class="container">
  <div class="row">
    <div class="col-md-6 col-md-offset-3">
      <div class="panel panel-primary">
        <div class="panel-heading">
          <span class="glyphicon glyphicon-list"></span>Contacts
          <div class="pull-right action-buttons">
            <div class="btn-group pull-right">
              <button type="button" class="btn btn-default btn-xs dropdown-toggle" data-toggle="dropdown">
                <span class="glyphicon glyphicon-cog" style="margin-right: 0px;"></span>
              </button>
              <ul class="dropdown-menu slidedown">
                <li><a href="#"><span class="glyphicon glyphicon-trash"></span>Delete All</a></li>
              </ul>
            </div>
          </div>
        </div>
        <div class="panel-body">
          <ul class="list-group">
            <li class="list-group-item contact-list" ng-repeat="contact in contactsList">
              <div class="contact-info">
                <div>
                  <div class="user-avatar">
                    <img src="../images/user.png"/>
                  </div>
                  <div class="contact-name">
                    <span>{{contact.name}}</span><br>
                    <p class="contact-details">
                      <small>
                        <span><i class="glyphicon glyphicon-phone-alt text-success"></i>{{contact.phoneNumber}}</span>
                        <span><i class="glyphicon glyphicon-map-marker text-success"></i>{{contact.location}}</span>
                      </small>
                    </p>
                  </div>
                </div>
              </div>
              <div class="pull-right action-buttons">
                <a href="#"><span class="glyphicon glyphicon-pencil"></span></a>
                <a href="#" class="trash"><span class="glyphicon glyphicon-trash"></span></a>
              </div>
            </li>
          </ul>
        </div>
        <div class="panel-footer">
          <div class="row">
            <div class="col-md-6">
              <h6>Total Count: <span class="badge">{{totalItems}}</span></h6>
            </div>
            <div class="col-md-6" ng-if="contactsList.length > 1">
              <ul uib-pagination total-items="totalItems" ng-model="$parent.currentPage" ng-change="pageChanged()" class="pagination-sm pull-right" items-per-page="itemsPerPage" max-size="maxSize" previous-text="&lsaquo;" next-text="&rsaquo;"></ul>
            </div>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>

Step 4: Create a service that calls the API to fetch the data.


'use strict';

var appServices = angular.module('angularApp');

appServices.factory('contactsService', function(contactsFactory, $q) {

    var service = {};

    service.contactsList = null;

    service.getContactsList = function(offset, itemsPerPage) {
        var deferred = $q.defer();

        contactsFactory.contacts().list({
            'offset': offset,
            'max': itemsPerPage
        }).$promise.then(function(response) {
            if (response.results) {
                service.contactsList = response.results;
            } else {
                service.contactsList = [];
            }
            deferred.resolve();
        }, function(response) {
            deferred.reject(response.status);
        });

        return deferred.promise;
    };

    return service;
})

.factory('contactsFactory', ['$resource', 'AppConfig', function($resource, AppConfig) {

    var factory = {};

    factory.contacts = function() {
        return $resource(AppConfig.API_URL + 'contacts/:id', {
            id: '@id'
        }, {
            'list': {
                method: 'GET'
            }
        });
    };

    return factory;
}]);

Step 5: Create a controller that uses the service to fetch the data to paginate, and handles the pagination directive and the offset.


'use strict';

angular.module('angularApp')

.controller('MainCtrl', ['$scope', 'contactsService', function($scope, contactsService) {

    $scope.contactsList = [];
    $scope.totalItems = 0;
    $scope.currentPage = 1;
    $scope.itemsPerPage = 5; // max items per page
    $scope.offset = 0;
    $scope.maxSize = 4; // number of pager buttons to show

    $scope.getAllContacts = function() {
        contactsService.getContactsList($scope.offset, $scope.itemsPerPage).then(function() {
            // results is a one-element array holding the map built by the server
            $scope.contactsList = contactsService.contactsList[0].contacts;
            $scope.totalItems = contactsService.contactsList[0].totalCount;
        }, function() {
            console.log("error getting contacts list");
        });
    };

    $scope.pageChanged = function() {
        // set the offset on click of a pager button
        $scope.offset = ($scope.currentPage - 1) * $scope.itemsPerPage;
        $scope.getAllContacts();
    };

    $scope.init = function() {
        $scope.getAllContacts();
    };

    $scope.init();
}]);

Step 6: Run the app


grunt serve

Step 7: Browse the app in a browser and click the pager buttons to see the paginated results.


The Oracle Database driver for Node.js (node-oracledb) is pretty low level, and it uses callbacks instead of promises.

Need for a wrapper module:
1) Replace callbacks with promises.
2) Provide a simplified execute method that does not require explicitly getting and releasing connections.

I wrote a sample application that uses such a wrapper module, implementing all of the functionality listed above.

Creating the module

The wrapper module, database.js, uses the es6-promise module to expose all of the major methods of the driver classes via promises. The getConnection and releaseConnection methods ensure that any buildup and teardown scripts are executed. Finally, I highlighted the simpleExecute function as it does the most work, exposing a promise based method that handles getting and releasing a connection as well as running buildup and teardown scripts.
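The callback-to-promise wrapping that database.js repeats for each driver method can be sketched generically. The promisify helper below is my own illustration of the pattern, not part of the driver or of database.js; it assumes the wrapped function takes a Node-style (err, result) callback as its last argument.

```javascript
// Generic sketch of the callback-to-promise pattern used throughout
// database.js: adapt a Node-style (err, result) callback to resolve/reject.
function promisify(fn) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    return new Promise(function (resolve, reject) {
      fn.apply(null, args.concat(function (err, result) {
        if (err) {
          return reject(err);
        }
        resolve(result);
      }));
    });
  };
}

// Example: wrap a hypothetical Node-style async function.
function doubleLater(x, cb) {
  setImmediate(function () {
    cb(null, x * 2);
  });
}

var doubled = promisify(doubleLater);
```

Each function in the module below (createPool, terminatePool, getConnection, execute) is essentially a hand-written instance of this pattern around the corresponding oracledb call.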

var oracledb = require('oracledb');
var Promise = require('es6-promise').Promise;
var async = require('async');
var pool;
var buildupScripts = [];
var teardownScripts = [];

module.exports.OBJECT = oracledb.OBJECT;

function createPool(config) {
    return new Promise(function(resolve, reject) {
        oracledb.createPool(
            config,
            function(err, p) {
                if (err) {
                    return reject(err);
                }

                pool = p;

                resolve(pool);
            }
        );
    });
}

module.exports.createPool = createPool;

function terminatePool() {
    return new Promise(function(resolve, reject) {
        if (pool) {
            pool.terminate(function(err) {
                if (err) {
                    return reject(err);
                }

                resolve();
            });
        } else {
            resolve();
        }
    });
}

module.exports.terminatePool = terminatePool;

function getPool() {
    return pool;
}

module.exports.getPool = getPool;

function addBuildupSql(statement) {
    var stmt = {
        sql: statement.sql,
        binds: statement.binds || {},
        options: statement.options || {}
    };

    buildupScripts.push(stmt);
}

module.exports.addBuildupSql = addBuildupSql;

function addTeardownSql(statement) {
    var stmt = {
        sql: statement.sql,
        binds: statement.binds || {},
        options: statement.options || {}
    };

    teardownScripts.push(stmt);
}

module.exports.addTeardownSql = addTeardownSql;

function getConnection() {
    return new Promise(function(resolve, reject) {
        pool.getConnection(function(err, connection) {
            if (err) {
                return reject(err);
            }

            async.eachSeries(
                buildupScripts,
                function(statement, callback) {
                    connection.execute(statement.sql, statement.binds, statement.options, function(err) {
                        callback(err);
                    });
                },
                function (err) {
                    if (err) {
                        return reject(err);
                    }

                    resolve(connection);
                }
            );
        });
    });
}

module.exports.getConnection = getConnection;

function execute(sql, bindParams, options, connection) {
    return new Promise(function(resolve, reject) {
        connection.execute(sql, bindParams, options, function(err, results) {
            if (err) {
                return reject(err);
            }

            resolve(results);
        });
    });
}

module.exports.execute = execute;

function releaseConnection(connection) {
    async.eachSeries(
        teardownScripts,
        function(statement, callback) {
            connection.execute(statement.sql, statement.binds, statement.options, function(err) {
                callback(err);
            });
        },
        function (err) {
            if (err) {
                console.error(err); //don't return as we still need to release the connection
            }

            connection.release(function(err) {
                if (err) {
                    console.error(err);
                }
            });
        }
    );
}

module.exports.releaseConnection = releaseConnection;

function simpleExecute(sql, bindParams, options) {
    options = options || {}; //avoid a TypeError when no options are passed
    options.isAutoCommit = true;

    return new Promise(function(resolve, reject) {
        getConnection()
            .then(function(connection){
                execute(sql, bindParams, options, connection)
                    .then(function(results) {
                        resolve(results);

                        process.nextTick(function() {
                            releaseConnection(connection);
                        });
                    })
                    .catch(function(err) {
                        reject(err);

                        process.nextTick(function() {
                            releaseConnection(connection);
                        });
                    });
            })
            .catch(function(err) {
                reject(err);
            });
    });
}

module.exports.simpleExecute = simpleExecute;

In a coming post, I will show how to use this wrapper module in application code.