
Wednesday, May 07, 2014

My first Netty-based server

Netty logo

For our rhq-metrics project (and actually for RHQ proper) I wanted to be able to receive syslog messages, parse them and in case of content in the form of


type=metric thread.count=11 thread.active=1 heap.permgen.size=20040000

forward the individual key-value pairs (e.g. thread.count=11) to a REST endpoint for further processing.
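Parsing such a line into individual key-value pairs needs only a few lines of Java; a minimal sketch (illustration only, not the actual rhq-metrics code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MetricLineParser {

    /** Splits a line like "type=metric thread.count=11 ..." into key/value pairs. */
    public static Map<String, String> parse(String line) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String token : line.trim().split("\\s+")) {
            int eq = token.indexOf('=');
            if (eq > 0) { // skip tokens without a key=value shape
                result.put(token.substring(0, eq), token.substring(eq + 1));
            }
        }
        return result;
    }
}
```

Each resulting entry (e.g. thread.count → 11) can then be forwarded individually.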

After I started with the usual plain Java ServerSocket code, I realized that this could become cumbersome and that it was the perfect opportunity to check out Netty. Luckily I already had an Early Access copy of Netty in Action lying around and was ready to start.

Writing a first server that listens on TCP port 5140 to receive messages was easy, as was modifying /etc/syslog.conf to forward messages to that port.

Unfortunately I did not get any message.

It turned out that on OS X syslog message forwarding is UDP only. So I sat down, converted the code to use UDP and was sort of happy. I was also able to create the backend connection to the REST server, but I was missing the piece on how to "forward" the incoming data to the REST client connection.

Luckily my colleague Norman Maurer, one of the Netty gurus, helped me a lot and so I got that going.

After this worked, I continued and added a TCP server as well, so that it is now possible to receive messages over both TCP and UDP.

The (current) final setup looks like this:

Overview of my Netty app

I need two different decoders here: for TCP the (payload) data received is directly contained in a ByteBuf, while for UDP it arrives inside a DatagramPacket and needs to be pulled out into a ByteBuf for further decoding.

One larger issue I had during development (aside from wrapping my head around "The Netty Way" of doing things) was that I wrote a Decoder like


public class UdpSyslogEventDecoder extends MessageToMessageDecoder<DatagramPacket> {

    protected void decode(ChannelHandlerContext ctx, DatagramPacket msg, List<Object> out) throws Exception {

and it never got called by Netty. I could see that it was instantiated when the pipeline was set up, but for incoming messages it was just bypassed.

After I added a method


public boolean acceptInboundMessage(Object msg) throws Exception {
    return true;
}

my decoder was called, but Netty was throwing ClassCastExceptions. Those turned out to be pretty helpful: the IDE had automatically imported java.net.DatagramPacket, which is not what Netty wants here. As Netty checks the signature of decode() and this did not match, Netty just ignored the decoder.
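With the import fixed to Netty's own class, the working decoder boils down to something like this sketch (Netty 4 API; error handling omitted):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.socket.DatagramPacket; // Netty's class, NOT java.net.DatagramPacket
import io.netty.handler.codec.MessageToMessageDecoder;

import java.util.List;

public class UdpSyslogEventDecoder extends MessageToMessageDecoder<DatagramPacket> {

    @Override
    protected void decode(ChannelHandlerContext ctx, DatagramPacket msg, List<Object> out)
            throws Exception {
        // Pull the payload out of the datagram so the downstream syslog
        // decoder can work on a plain ByteBuf, just as in the TCP case.
        ByteBuf payload = msg.content();
        out.add(payload.retain());
    }
}
```

The @Override annotation would have caught the wrong-import mistake at compile time.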

You can find the source of this server on GitHub.

Thursday, May 23, 2013

Support for paging in the RHQ REST-api (updated)

I have just pushed some changes to RHQ master that implement paging for (most) of the REST-endpoints that return collections of data.



If you remember, I was asking the other day if there is a "standard" way to do this. Basically there are two "big options":
  1. Put paging info in the Http-Header
  2. Put paging info in the body of the message


While I think paging information is metadata that should not "pollute" the body, I understand the argument from the JavaScript side that they don't easily have access to the http headers within AJAX requests. So what I have now done is to implement both:
  1. Paging in the http header: this is the standard way that you get if you just request the "normal" media types of application/xml or application/json (output from running RestAssured):

    [update]
My colleague Libor pointed out that the links did not match the format from RFC 5988 Web Linking; this is now fixed.
    [/update]

    Request method: GET
    Request path: http://localhost:7080/rest/resource
    Query params: page=1
    ps=2
    category=service
    Headers: Content-Type=*/*
    Accept=application/json
    HTTP/1.1 200 OK
    Server=Apache-Coyote/1.1
    Pragma=No-cache
    Cache-Control=no-cache
    Expires=Thu, 01 Jan 1970 01:00:00 CET
    Link=<http://localhost:7080/rest/resource?ps=2&category=service&page=2>; rel="next"
    Link=<http://localhost:7080/rest/resource?ps=2&category=service&page=0>; rel="prev"
    Link=<http://localhost:7080/rest/resource?ps=2&category=service&page=152>; rel="last"
    Link=<http://localhost:7080/rest/resource?page=1&ps=2&category=service>; rel="current"
    Content-Encoding=gzip
    X-collection-size=305
    Content-Type=application/json
    Transfer-Encoding=chunked
    Date=Thu, 23 May 2013 07:57:38 GMT

    [
    {
    "resourceName": "/",
    "resourceId": 10041,
    "typeName": "File System",
    "typeId": 10013,
    …..
  2. Paging as part of the body - there the "real collection" is wrapped inside an object that also contains paging meta data as well as the paging links. To request this representation, a media type of application/vnd.rhq.wrapped+json needs to be used (and this is only available with JSON at the moment):

    Request method: GET
    Request path: http://localhost:7080/rest/resource
    Query params: page=1
    ps=2
    category=service
    Headers: Content-Type=*/*
    Accept=application/vnd.rhq.wrapped+json
    Cookies:
    Body:
    HTTP/1.1 200 OK
    Server=Apache-Coyote/1.1
    Pragma=No-cache
    Cache-Control=no-cache
    Expires=Thu, 01 Jan 1970 01:00:00 CET
    Content-Encoding=gzip
    Content-Type=application/vnd.rhq.wrapped+json
    Transfer-Encoding=chunked
    Date=Thu, 23 May 2013 07:57:40 GMT

    {
    "data": [
    {
    "resourceName": "/",
    "resourceId": 10041,
    "typeName": "File System",
    "typeId": 10013,

    ],
    "pageSize": 2,
    "currentPage": 1,
    "lastPage": 152,
    "totalSize": 305,
    "links": [
    {
    "next": {
    "href": "http://localhost:7080/rest/resource?ps=2&category=service&page=2"
    }
    },

    }
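For the header variant, a client has to pull the relations out of the Link headers itself; a minimal sketch of such a parser for the RFC 5988 format shown above (illustration only, not RHQ code):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkHeaderParser {

    // Matches <URL>; rel="name" as produced in the Link headers above
    private static final Pattern LINK =
            Pattern.compile("<([^>]+)>\\s*;\\s*rel=\"([^\"]+)\"");

    /** Maps each rel name ("next", "prev", ...) to its URL. */
    public static Map<String, String> parse(Iterable<String> linkHeaders) {
        Map<String, String> rels = new LinkedHashMap<>();
        for (String header : linkHeaders) {
            Matcher m = LINK.matcher(header);
            while (m.find()) {
                rels.put(m.group(2), m.group(1));
            }
        }
        return rels;
    }
}
```

A client can then simply follow rels.get("next") until it is absent.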

Please try this if you can. We want to get this into a "finished" state (as for the whole REST-api) for RHQ 4.8.

Thursday, March 07, 2013

Sorry Miss Jackson -- or how I loved to do custom Json serialization in AS7 with RestEasy

Who doesn't remember the awesome "Sorry Miss Jackson" video ?

Actually that doesn't have anything to do with what I am talking about here -- except that RestEasy inside JBoss AS7 internally uses the Jackson JSON processing library. But let me start from the beginning.

Inside the RHQ REST api we have exposed links (in json representation) like this:

{
    "rel": "self",
    "href": "http://..."
}

which is the natural format when a Pojo

class Link {
    String rel;
    String href;
}

is serialized (with the Jackson provider in RestEasy).

Now while this is pretty cool, there is the disadvantage that if you need the link for a rel of 'edit', you have to load the list of links, iterate over them, check each link whether its rel is 'edit' and then take the value of its href.

And there is this Bugzilla entry.

I started investigating this and thought I would use a MessageBodyWriter and all would be well. Unfortunately MessageBodyWriters do not work recursively and thus cannot format Link objects in a special way when they are part of another object.

Luckily I had done custom serialization with Jackson in the as7-plugin before, so I tried this, but the serializer was not even instantiated. More trying and fiddling and a question on StackOverflow led me to a solution: copying the Jackson libraries (core, mapper and jaxrs) into the lib directory of the RHQ ear. Then all of a sudden the new serialization worked. The output is now

{
    "self": {
        "href": "http://..."
    }
}
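The custom serialization itself can be done with a Jackson JsonSerializer hooked onto the Link class, e.g. via @JsonSerialize(using = LinkSerializer.class). A sketch against the Jackson 1.x API that RestEasy in AS7 ships -- not necessarily the exact RHQ code:

```java
import org.codehaus.jackson.JsonGenerator;
import org.codehaus.jackson.map.JsonSerializer;
import org.codehaus.jackson.map.SerializerProvider;

import java.io.IOException;

public class LinkSerializer extends JsonSerializer<Link> {

    @Override
    public void serialize(Link link, JsonGenerator jgen, SerializerProvider provider)
            throws IOException {
        // Emit { "<rel>" : { "href" : "<href>" } } instead of the default
        // { "rel" : "...", "href" : "..." } representation.
        jgen.writeStartObject();
        jgen.writeFieldName(link.rel);
        jgen.writeStartObject();
        jgen.writeStringField("href", link.href);
        jgen.writeEndObject();
        jgen.writeEndObject();
    }
}
```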


So far so nice.

Now the final riddle was how to use the Jackson libraries that are already in use by the resteasy-jackson-provider. This was solved by adding the respective entries to jboss-deployment-structure.xml, which ends up in the META-INF directory of the rhq.ear directory.

The content in this jboss-deployment-structure.xml then looks like this (abbreviated):


<sub-deployment name="rhq-enterprise-server-ejb3.jar">
    <dependencies>
        ....
        <module name="org.codehaus.jackson.jackson-core-asl" export="true"/>
        <module name="org.codehaus.jackson.jackson-jaxrs" export="true"/>
        <module name="org.codehaus.jackson.jackson-mapper-asl" export="true"/>
    </dependencies>
</sub-deployment>


AS7 is still complaining a bit at startup:


16:48:58,375 WARN [org.jboss.as.dependency.private] (MSC service thread 1-3) JBAS018567: Deployment "deployment.rhq.ear.rhq-enterprise-server-ejb3.jar" is using a private module ("org.codehaus.jackson.jackson-core-asl:main") which may be changed or removed in future versions without notice.


but for the moment the result is good enough - and as we do not regularly change the underlying container for RHQ, this is tolerable.

And of course I would be interested in an "official" solution to this -- other than just doubling those jars into my application. Actually, I think doubling may even complicate the situation, as (re-)using the Jackson jars that RestEasy inside AS7 uses guarantees that the version is compatible.

Tuesday, February 19, 2013

Best practice for paging in RESTful APIS (updated)?

In the RHQ-REST api, we return individual objects and also collections of objects. Some of those collections are rather small (number of platforms), while others can grow a lot (number of resources in total or number of alerts fired). For the latter it is advisable to do some paging and not return the full result set in one go. Some of the reasons are:
  • Memory consumption in server and client
  • Bandwidth needed to transfer the data
  • Latency when transferring huge amounts of data over slow networks


Inside RHQ we have the concept of a PageList<?> where an internal PageControl object defines the page size and other criteria like sorting. The PageList then only contains the objects from a certain page. I think this is a pretty common setup.
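A hypothetical minimal version of that PageList/PageControl idea (invented for illustration; the real RHQ classes are richer, with sorting criteria etc.):

```java
import java.util.ArrayList;
import java.util.List;

/** Defines which slice of a result set to return (hypothetical sketch). */
public class PageControl {

    final int pageNumber; // zero-based
    final int pageSize;

    public PageControl(int pageNumber, int pageSize) {
        this.pageNumber = pageNumber;
        this.pageSize = pageSize;
    }

    /** Cuts the requested page out of a full result list. */
    public <T> List<T> page(List<T> all) {
        int from = Math.min(pageNumber * pageSize, all.size());
        int to = Math.min(from + pageSize, all.size());
        return new ArrayList<>(all.subList(from, to));
    }
}
```

With pageSize 2, page 1 of a five-element list yields the third and fourth elements.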

And here is where my question comes:

What is the "best-practice" to represent such a PageList in a RESTful api? So far I have seen two major ways:
  1. Add a Link: header that contains the prev and next relations. This is what RFC 5988 suggests and what projects like AeroGear use. The advantage here is that the body still contains the "raw" data and not meta data. And for both cases of 'single' object and 'collection' the data is at the 'root' of the body. Also paging is available for HEAD requests.

    On the other hand, it may get a bit harder for some client code (JavaScript, jQuery) to access the header and make use of the paging links
  2. Put the prev and next relations in the body of the request next to the collection. This has the advantage that there is no need to parse the http header. Disadvantage is that the real payload is now shifted "one level down" for collections.


    I sort of see the paging links as meta-data and think that this should not be mixed with the payload. Now a colleague of mine said: "Isn't that a state change link for the collection like the 'rel=edit' for a single object?". This sounds odd, but can't be denied.

Actually I have also seen mentioning the use of cookies to send the paging information, but that looks very non-transparent to me, so I am not considering this at all.

Just to be clear: I am explicitly talking about paging of collections and not about affordances of individual objects.

So are there established best practices? How do others do it?

If going for the Link: header: would people rather see multiple Link headers (see RFC 2616), one for each relation:

Link: <http://foo/?page=3>; rel='next'
Link: <http://foo/?page=1>; rel='prev'

or rather the combined way:

Link: <http://foo/?page=3>; rel='next', Link: <http://foo/?page=1>; rel='prev'

that is listed in RFC 5988?

[update]

I just saw that URLConnection.getHeaderField(name) does not support the multiple Link: headers as it only returns the last occurrence:

If called on a connection that sets the same header multiple times with possibly different values, only the last value is returned.


While there may be other ways to access all the Link: headers, this is too obvious a pitfall and can be avoided by not using that style.
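If one does end up with multiple Link: headers anyway, java.net.URLConnection.getHeaderFields() returns all values per header name. A sketch that works on such a map (the hand-built map in the usage stands in for a real connection's headers):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class LinkHeaders {

    /**
     * Collects every Link: value from a header map shaped like the one
     * URLConnection.getHeaderFields() returns (header name -> all values).
     * Header names are matched case-insensitively; the status line has a
     * null key and is skipped.
     */
    public static List<String> all(Map<String, List<String>> headers) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : headers.entrySet()) {
            if (e.getKey() != null && e.getKey().equalsIgnoreCase("Link")) {
                result.addAll(e.getValue());
            }
        }
        return result;
    }
}
```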

Wednesday, January 23, 2013

Monitoring the monster

RHQ logo / JDF logo

The classical RHQ setup assumes an agent and agent plugins being present on a machine ("platform" in RHQ-speak). Plugins then communicate with the managed resource (e.g. an AS7 server): they ask it for metric values or run operations (e.g. "reboot").

This article shows an alternative way of monitoring applications, using the Ticket Monster application from the JBoss Developer Framework as an example.


The communication protocol between the plugin and the managed resource is dependent on the capabilities of that resource. If the resource is a java process, JMX is often used. In the case of JBoss AS 7, we use the DMR over http protocol. For other kinds of resources this could also be file access or jdbc in case of databases. The next picture shows this setup.

RHQ classic setup


The agent plugin talks to the managed resource and pulls the metrics from it. The agent collects the metrics from multiple resources and then sends them as batches to the RHQ server, where they are stored, processed for alerting and can be viewed in the UI or retrieved via CLI and REST-api.

Extending

The above scenario is of course not limited to infrastructure and can also be used to monitor applications that sit inside e.g. AS7. You can write a plugin that uses the underlying connection to talk to the resource and gather statistics from there (if you build on top of the jmx or as7 plugin, you don't necessarily need to write Java code).

This also means that you need to add hooks to your application that export metrics and make them available in the management model (the MBean-Server in classical jre; the DMR model in AS7), so that the plugin can retrieve them.

Pushing from the application

Another way to monitor application data is to have the application push the data to the RHQ server directly. You still need a plugin descriptor to define the metadata in the RHQ server (what kinds of resources and metrics exist, what units the metrics have, etc.), but you only define the descriptor and do not write Java code for the plugin. This works by inheriting from the No-op plugin. In addition, you can deploy the descriptor as a jar-less plugin.

The next graphic shows the setup:

RHQ with push from TicketMonster


In this scenario you can still have an agent with plugins on the platform, but this is not required (though it is recommended for basic infrastructure monitoring). On the server side we deploy the ticket-monster plugin descriptor.

The TicketMonster application has been augmented to push each booking to the RHQ server as two metrics: the total number of tickets sold and the total price of the booking (BookingService.createBooking()).


@Stateless
public class BookingService extends BaseEntityService<Booking> {

    @Inject
    private RhqClient rhqClient;

    public Response createBooking(BookingRequest bookingRequest) {
        […]
        // Persist the booking, including cascaded relationships
        booking.setPerformance(performance);
        booking.setCancellationCode("abc");
        getEntityManager().persist(booking);
        newBookingEvent.fire(booking);
        rhqClient.reportBooking(booking);
        return Response.ok().entity(booking)
                .type(MediaType.APPLICATION_JSON_TYPE).build();
    }
}

This push happens over a http connection to the REST-api of the RHQ server, which is defined inside the RhqClient singleton bean.

In this RhqClient bean we read the rhq.properties file on startup to determine if there should be any reporting at all and how to access the server. If reporting is enabled, we try to find the platform we are running on and, if the RHQ server does not know about it, create it. On top of the platform we create the TicketMonster instance. This is safe to do multiple times, as is the platform creation -- I am looking for an existing platform where an agent might already monitor basic data like cpu usage or disk utilization.

The reporting of the metrics then looks like this:


@Asynchronous
public void reportBooking(Booking booking) {

    if (reportTo && initialized) {
        List<Metric> metrics = new ArrayList<Metric>(2);

        Metric m = new Metric("tickets",
                System.currentTimeMillis(),
                (double) booking.getTickets().size());
        metrics.add(m);

        m = new Metric("price",
                System.currentTimeMillis(),
                (double) booking.getTotalTicketPrice());
        metrics.add(m);

        sendMetrics(metrics, ticketMonsterServerId);
    }
}


Basically we construct two Metric objects and then send them to the RHQ-Server. The second parameter is the resource id of the TicketMonster server resource in the RHQ-server, which we have obtained from the create-request I've mentioned above.

A difference to the classical setup, where MeasurementData objects inside RHQ always have a so-called schedule id associated, is that in the above case we pass the metric name as it appears in the deployment descriptor and let the RHQ server sort out the schedule id.


<metric property="tickets" dataType="measurement"
        displayType="summary" description="Total number of tickets sold"/>
<metric property="price"
        displayType="summary" description="Total selling price"/>
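The Metric objects referenced above can be as simple as a name/timestamp/value triple that gets serialized to JSON before being posted to the REST-api. A hypothetical minimal version (invented for illustration; the actual RHQ client class differs):

```java
/** Hypothetical minimal metric value holder, matching the constructor calls above. */
public class Metric {

    private final String name;      // metric name as in the plugin descriptor
    private final long timestamp;   // collection time in ms since the epoch
    private final double value;

    public Metric(String name, long timestamp, double value) {
        this.name = name;
        this.timestamp = timestamp;
        this.value = value;
    }

    /** Hand-rolled JSON for illustration; a real client would use a JSON library. */
    public String toJson() {
        return String.format("{\"name\":\"%s\",\"timestamp\":%d,\"value\":%s}",
                name, timestamp, value);
    }
}
```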


And voilà, this is what the sales look like that are created by the Bot inside TicketMonster:

Bookings in the RHQ-UI


The display interval has been set to "last 12 minutes". If you see a bar, it means that within a timeslot of 12 min / 60 slots = 12 sec there were multiple bookings. In this case the bar shows the max and min value, while the little dot inside shows the average (via the REST-api it is still possible to see the individual values for the last 7 days).

Why would I want to do that?

The question here is of course: why would I want to send my business metrics to the RHQ server that is normally used for my infrastructure monitoring?

Because we can! :-)

Seriously, such business metrics can also indicate issues. If e.g. the number of ticket bookings is unusually high or low, this can be a source of concern and warrant an alert. Take the example of e-shops that sell electronics: someone makes a typo and laptops that normally sell at €1300 are suddenly offered at €130. That news quickly spreads via social networks and sales triple over their normal numbers. Here, monitoring the number of laptops sold can be helpful.

The other reason is that RHQ with its user concept allows setting up special users that only have (read) access to the TicketMonster resources, but not to other resources inside RHQ. This way it is possible to give business people access to the metrics from monitoring ticket sales.

Resource tree with all resources / Resource tree for the TicketMonster-only user


On the left you see the resource tree below the platform "snert" with all the resources, as e.g. the 'rhqadmin' user sees it. On the right side, you see the tree as seen by a user that only has the right to see the TicketMonster server.

TODOs

The above is a proof of concept to get this started. There are still some things left to do:

  • Create sub-resources for performances and report their bookings separately
  • Regularly report availability of the TicketMonster server
  • Better separate out the internal code that still lives in the RhqClient class
  • Get the above incorporated into TicketMonster proper - currently it lives in my private github repo
  • Decide how to better handle an RHQ server that is temporarily unavailable
  • Get Forge to create the "send metric / … " code automatically when a class or field has some special annotation for this purpose. Same for the creation of new items like Performances in the TicketMonster case.


If you want to try it, you need a current copy of RHQ master -- the upcoming RHQ 4.6 release will have some of the changes needed on the RHQ side. The RHQ 4.6 beta does not yet have them.

Wednesday, January 16, 2013

AlertDefinitions in the RHQ REST-Api


I have over the last few days added support for alert definitions to the REST-api in RHQ master. Although this will make it into RHQ 4.6, it is not the final state of affairs.

On top of the API implementation I have also written 27 tests (for the alert part, at the time of writing) that use Rest Assured to test the api.

Please try the API, give feedback and report errors -- if possible as Rest Assured tests, to increase the test coverage and as an easy way to reproduce your issues.

I think it would also be very cool if someone could write a script in whatever language that exports definitions and applies them to a resource hierarchy on a different server (e.g. from test to production).

Thursday, January 10, 2013

Testing REST-apis with Rest Assured

The REST-api in RHQ is evolving, and long ago I started writing some integration tests against it. I did not want to do that with pure http calls, so I was looking for a testing framework and found one that I used for some time. I tried to enhance it a bit to better suit my needs, but didn't really get it to work.

I started searching again and this time found Rest Assured, which is almost perfect. Let's have a look at a very simple example:


expect()
    .statusCode(200)
    .log().ifError()
.when()
    .get("/status/server");


As you can see, this is a fluent API that is very expressive, so I don't really need to explain what the above is supposed to do.

In the next example I'll add some authentication


given()
    .auth().basic("user", "name23")
.expect()
    .statusCode(401)
.when()
    .get("/rest/status");


Here we add "input parameters" to the call, which are in our case the information for basic authentication, and expect the call to fail with a "bad auth" response.

Now it is tedious to always provide the authentication bits throughout all tests, so it is possible to tell Rest Assured to always deliver a default set of credentials, which can still be overwritten as just shown:


@Before
public void setUp() {
    RestAssured.authentication = basic("rhqadmin", "rhqadmin");
}


There are a lot more options to set as default, like the base URI, port, basePath and so on.

Now let's have a look at how we can supply other parameters:


AlertDefinition alertDefinition = new AlertDefinition(….);

AlertDefinition result =
given()
    .contentType(ContentType.JSON)
    .header(acceptJson)
    .body(alertDefinition)
    .log().everything()
    .queryParam("resourceId", 10001)
.expect()
    .statusCode(201)
    .log().ifError()
.when()
    .post("/alert/definitions")
.as(AlertDefinition.class);


We start by creating a Java object AlertDefinition that we use as the body of the POST request. We define that it should be sent as JSON and that we expect JSON back. For the URL, a query parameter with the name 'resourceId' and value '10001' should be appended. We also expect that the call returns a 201 (created) and would like to know the details if this is not the case. Last but not least we tell Rest Assured that it should convert the answer back into an object of type AlertDefinition, which we can then use to check constraints or work with further.

Rest Assured offers another interesting, built-in way to check constraints with the help of XPath or its JSON counterpart JsonPath:


expect()
    .statusCode(200)
    .body("name", is("discovery"))
.when()
    .get("/operation/definition");


In this (shortened) example we expect that the GET call returns OK and that the body contains an object whose 'name' field has the value 'discovery'.

Conclusion

Rest Assured is a very powerful framework to write tests against a REST/hypermedia api. With its fluent approach and expressive method names it makes it easy to understand what a certain call is supposed to do and return.

The Rest Assured web site has more examples and documentation. The RHQ code base now also has >70 tests using that framework.

Wednesday, January 09, 2013

A small blurb of what I am currently working on

I have not yet committed and pushed this to the repo and it is still fragile and most likely to change -- and still I want to share it with you:


$ curl -i -u rhqadmin:rhqadmin -X POST \
    http://localhost:7080/rest/alert/definitions?resourceId=10001 \
    -d @/tmp/foo -HContent-type:application/json
HTTP/1.1 201 Created
Server: Apache-Coyote/1.1
Location: http://localhost:7080/rest/alert/definition/10682
Content-Type: application/json
Transfer-Encoding: chunked
Date: Wed, 09 Jan 2013 21:41:10 GMT

{
  "id": 10682,
  "name": "-x-test-full-definition",
  "enabled": false,
  "priority": "HIGH",
  "recoveryId": 0,
  "conditionMode": "ANY",
  "conditions": [
    {
      "name": "AVAIL_GOES_DOWN",
      "category": "AVAILABILITY",
      "id": 10242,
      "threshold": null,
      "option": null,
      "triggerId": null
    }
  ],
  "notifications": [
    {
      "id": 10432,
      "senderName": "Direct Emails",
      "config": {
        "emailAddress": "enoch@root.org"
      }
    }
  ]
}


In the UI this looks like:

General view
Conditions
Notifications


Other features like dampening or recovery are not yet implemented.

To be complete, the content of /tmp/foo looks like this:


{
  "id": 0,
  "name": "-x-test-full-definition",
  "enabled": false,
  "priority": "HIGH",
  "recoveryId": 0,
  "conditionMode": "ANY",
  "conditions": [
    {
      "id": 0,
      "name": "AVAIL_GOES_DOWN",
      "category": "AVAILABILITY",
      "threshold": null,
      "option": null,
      "triggerId": null
    }
  ],
  "notifications": [
    {
      "id": 0,
      "senderName": "Direct Emails",
      "config": {
        "emailAddress": "enoch@root.org"
      }
    }
  ]
}

Friday, November 09, 2012

JAX-RS 2 client in RHQ (plugins)

When my fellow Red Hatter Bill Burke recently wrote about the new features in JAX-RS 2.0, I started to consider using the new Client api in tests and plugins. Luckily Bill provided a 3.0-beta-1 release of RESTEasy that I started with.

To get this going, I started with the plugin case and wrote my few lines of code in the editor. All red, as the classes were not known yet. After many iterations in the editor and the standalone-pc, I ended up with this list of dependencies, which all need to be included in the final plugin jar:

<artifactItems>
    <artifactItem>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-client</artifactId>
    </artifactItem>
    <artifactItem>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>jaxrs-api</artifactId>
    </artifactItem>
    <artifactItem>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-jaxrs</artifactId>
    </artifactItem>
    <artifactItem>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
    </artifactItem>
    <artifactItem>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpcore</artifactId>
    </artifactItem>
    <artifactItem>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-jackson-provider</artifactId>
    </artifactItem>
    <artifactItem>
        <groupId>org.codehaus.jackson</groupId>
        <artifactId>jackson-jaxrs</artifactId>
    </artifactItem>
    <artifactItem>
        <groupId>org.codehaus.jackson</groupId>
        <artifactId>jackson-core-asl</artifactId>
    </artifactItem>
    <artifactItem>
        <groupId>org.codehaus.jackson</groupId>
        <artifactId>jackson-mapper-asl</artifactId>
    </artifactItem>
</artifactItems>


Still I got errors that there was no provider for a media type of application/json available. It turned out that the RHQ plugin container's classloader does not honor the META-INF/services/javax.ws.rs.ext.Providers entry (or any of the services there), so I had to explicitly enable them in code:


javax.ws.rs.client.Client client = ClientFactory.newClient();
client.configuration().register(org.jboss.resteasy.plugins
        .providers.jackson.ResteasyJacksonProvider.class);
client.configuration().register(org.jboss.resteasy.plugins
        .providers.jackson.JacksonJsonpInterceptor.class);


Now the client works as intended. As there is quite a number of jars involved, I am thinking of providing an abstract REST plugin that provides the jars and the basic connection functionality, so that other plugins can build on top, just like we have done with the jmx-plugin.

I also tried to get this to work with Jersey, but failed somewhere in the maven world after downloading half of the Glassfish distribution.

One interesting question to me is the target group of this JAX-RS 2.0 client framework: the Jersey and RESTEasy implementations seem to grow rather large (which does not matter too much for servers or on the desktop), so Android apps are less likely to use them and will rather fall back to home-grown solutions. With the huge number of mobile phones, this could in the end mean that nobody touches the Client framework and everyone falls back to those smaller home-grown solutions on mobile and desktop.

Thursday, October 25, 2012

REST/JAX-RS documentation generation



As I may have mentioned before :-) we have added a REST api to RHQ. And one thing we have clearly found out during development and from feedback by users is that the pure linking feature of REST may not be enough to work with a RESTful api, and that some additional documentation is needed. Our Mark Little posted an article on InfoQ recently that touches on the same subject.

Take this example:


@PUT
@Path("/schedule/{id}")
@Consumes({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
Response updateSchedule(int scheduleId,
                        MetricSchedule in,
                        @Context HttpHeaders headers);


So this is a PUT request on some schedule with an {id} path parameter, where a certain MetricSchedule thing should be supplied. And then we get some Response back, which is just a generic JAX-RS "placeholder".

Now there is a cool project called Swagger with a pretty amazing interactive UI to run REST requests in the browser against e.g. the pet store.

Still this has some limitations:
  • You need to deploy Swagger, which not all enterprises may want
  • The actual objects are not described, as I've mentioned above

Metadata via annotations


The metadata in Swagger to define the semantics of the operations and the simple parameters is defined on the Java source as annotations on the restful methods like this:


@PUT
@Path("/schedule/{id}")
@Consumes({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
@ApiOperation(value = "Update the schedule (enabled, interval) ",
        responseClass = "MetricSchedule")
@ApiError(code = 404, reason = NO_SCHEDULE_FOR_ID)
Response updateSchedule(
        @ApiParam("Id of the schedule to query") @PathParam("id") int scheduleId,
        @ApiParam(value = "New schedule data", required = true) MetricSchedule in,
        @Context HttpHeaders headers);


which is already better, as we now know what the operation is supposed to do, that it will return an object of type MetricSchedule, and for the two parameters that are passed we also get a description of their semantics.

REST docs for RHQ



I was looking at how to document the stuff for some time, and after finding Swagger it became clear to me that I did not need to re-invent the annotations, but should (re)use what is there. Unfortunately the annotations were buried deep in the swagger-core module.

So I started contributing to Swagger - first by splitting off the annotations into their own maven module so that they do not have any dependency on other modules, which makes it much easier to re-use them in other projects like RHQ.

Still, with the above approach the data objects like said MetricSchedule are not documented. In order to do that as well, I've now added an @ApiClass annotation to swagger-annotations that allows documenting the data classes too (a little bit like JavaDoc, but accessible from an annotation processor). So you can now do:


@ApiClass(value = "This is the Foo object",
        description = "A Foo object is a place holder for this blog post")
public class Foo {
    int id;

    @ApiProperty("Return the ID of Foo")
    public int getId() { return id; }
}


to describe the data classes.

The annotations themselves are defined in the following maven artifact:


<dependency>
    <groupId>com.wordnik</groupId>
    <artifactId>swagger-annotations_2.9.1</artifactId>
    <version>1.1.1-SNAPSHOT</version>
</dependency>

which currently (as I write this) is only available from the Sonatype snapshots repository:


<repository>
    <!-- TODO temporary for the swagger annotations -->
    <id>sonatype-oss-snapshot</id>
    <name>Sonatype OSS Snapshot repository</name>
    <url>https://oss.sonatype.org/content/repositories/snapshots</url>
</repository>

The generator



Based on these annotations I have written an annotation processor that analyzes the source files and creates an XML document, which can then be transformed via XSLT into HTML or DocBook XML (and the latter can in turn be transformed into HTML or PDF with the 'normal' DocBook tool chain).
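The real rest-docs-generator is a compile-time annotation processor; as a rough, runnable illustration of the underlying idea (extract the annotation metadata, emit an XML fragment for the XSLT step), here is a simplified reflection-based sketch. The nested annotations and the ScheduleApi interface are stand-ins defined just for this example, not the actual Swagger or RHQ types.

```java
// Simplified sketch: read annotation metadata and emit XML. The real
// generator is a javax.annotation.processing processor; this stand-in
// (with locally defined annotations) only illustrates the principle.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class RestDocSketch {

    // Stand-ins for the Swagger annotations shown in the post
    @Retention(RetentionPolicy.RUNTIME)
    @interface ApiOperation { String value(); String responseClass() default ""; }

    @Retention(RetentionPolicy.RUNTIME)
    @interface ApiError { int code(); String reason(); }

    // Example API, annotated like the updateSchedule method above
    interface ScheduleApi {
        @ApiOperation(value = "Update the schedule (enabled, interval)",
                      responseClass = "MetricSchedule")
        @ApiError(code = 404, reason = "No schedule with the passed id exists")
        void updateSchedule();
    }

    /** Emit a small XML fragment for every annotated method of the class. */
    public static String toXml(Class<?> api) {
        StringBuilder sb = new StringBuilder("<operations>\n");
        for (Method m : api.getDeclaredMethods()) {
            ApiOperation op = m.getAnnotation(ApiOperation.class);
            if (op == null) continue;
            sb.append("  <operation method=\"").append(m.getName()).append("\">\n");
            sb.append("    <description>").append(op.value()).append("</description>\n");
            sb.append("    <returns>").append(op.responseClass()).append("</returns>\n");
            ApiError err = m.getAnnotation(ApiError.class);
            if (err != null) {
                sb.append("    <error code=\"").append(err.code()).append("\">")
                  .append(err.reason()).append("</error>\n");
            }
            sb.append("  </operation>\n");
        }
        return sb.append("</operations>\n").toString();
    }

    public static void main(String[] args) {
        // The resulting XML would then be fed into the XSLT step of the tool chain
        System.out.print(toXml(ScheduleApi.class));
    }
}
```

The real processor works on the source model (javax.lang.model) instead of reflection, but the flow - walk the methods, pull out the annotation values, serialize to XML - is the same.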

Tool chain


You can find the source of the rest-docs-generator in the RHQ-helpers project on GitHub.

The pom.xml file also shows how to use the generator to create the intermediate XML file.

Examples



You can see some examples of the generated output (from a slightly older version of the generator) in the RHQ 4.5.1 release files on SourceForge, as well as in the documentation for JBoss ON, where our docs team just took the input and fed it into their DocBook tool chain almost without any change.

Postprocessing…



If you are interested in seeing how the tool chain is used in RHQ, you can look at the pom.xml file of the server/jar module (search for 'REST-API').


Project independent



One thing I want to emphasize here is that the generator with its tool chain is completely independent of RHQ and can be reused for other projects.


Wednesday, June 20, 2012

RHQPocket updated



I have worked over the last days on improving RHQPocket, an Android application that serves as an example client for RHQ. Most of the features of RHQPocket will work (best) with a current version of the RHQ server (from current master).

The following is a recording I made that shows the current state.

Update on RHQPocket from Heiko W. Rupp on Vimeo.



The video has been recorded with the QuickTime player while the emulator was running. I first tried this with the "classical" emulator, but it used 100% CPU (one core), and with recording turned on it was so slow that simple actions, which are normally immediate, took several seconds.

My next try was to film my real handset, but too much ambient light and too many reflections made this a bad experience. After I heard that others had successfully used the hardware-accelerated GPU together with the virtualized x86 image for the emulator, I tried that as well; the CPU usage of the emulator dropped from 100% to around 15-20% (home screen), so that is what I used for the recording.

You can find the full source code on GitHub - contributions and feedback are very welcome.
If you don't want to build from source, you can install the APK from here.

Thursday, June 14, 2012

RHQ REST api: Support for JSONP (updated)



I have just committed support for JSONP to the RHQ REST-api.

Update: the first version required a special media type. This has now been removed, as jQuery seems to have issues sending such a type. Also, the default name of the callback parameter has been changed to jsonp.



To use it you need to pass a parameter naming the callback. Let's look at an example:


$ curl -i -u rhqadmin:rhqadmin \
http://localhost:7080/rest/1/alert?jsonp=foo \
-Haccept:application/json
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Pragma: No-cache
Cache-Control: no-cache
Expires: Thu, 01 Jan 1970 01:00:00 CET
X-Powered-By: Servlet 2.4; JBoss-4.2.0.CR2
Content-Type: application/javascript
Transfer-Encoding: chunked
Date: Thu, 14 Jun 2012 09:25:09 GMT

foo([{"name":"test","resource":{"typeName":null, ……..])


The name of the callback parameter is jsonp and the name of the callback function to return is
foo. In the output of the server you can see how the JSON data is wrapped inside foo().
The wrapping only happens when both the right content type is requested and the callback parameter is present.

The content type returned is then application/javascript.



Setting the name of the callback parameter

The name of the callback parameter (jsonp in the above example) can be set in web.xml:


<filter>
    <filter-name>JsonPFilter</filter-name>
    <filter-class>org.rhq.enterprise.rest.JsonPFilter</filter-class>
    <init-param>
        <param-name>filter.jsonp.callback</param-name>
        <param-value>jsonp</param-value>
        <description>Name of the callback to use for JsonP</description>
    </init-param>
</filter>
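The filter class itself lives in RHQ; to illustrate just the wrapping step such a filter performs, here is a minimal, hypothetical sketch. The class and method names are illustrative only, not RHQ's actual API - the real JsonPFilter additionally has to buffer the servlet response before it can wrap it.

```java
// Hypothetical sketch of the core logic of a JSONP filter: wrap the JSON
// body in a call to the requested callback function, or pass it through
// unchanged when no callback parameter was supplied.
public class JsonPWrapSketch {

    /**
     * Wrap a JSON payload in a JavaScript function call, as in foo({...}).
     * If no callback was requested, the payload is returned unchanged.
     */
    public static String wrap(String callback, String json) {
        if (callback == null || callback.isEmpty()) {
            return json; // no jsonp parameter present -> plain JSON
        }
        return callback + "(" + json + ")";
    }

    public static void main(String[] args) {
        // Mirrors the curl example: ?jsonp=foo turns the body into foo(...)
        System.out.println(wrap("foo", "[{\"name\":\"test\"}]"));
        System.out.println(wrap(null, "[{\"name\":\"test\"}]"));
    }
}
```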

Monday, May 28, 2012

RHQ REST api: added support for Group Definitions



I've just added some support for GroupDefinitions (aka "DynaGroups") to RHQ.

The following shows some examples:

List all definitions:

$ curl -i --user rhqadmin:rhqadmin http://localhost:7080/rest/1/group/definitions
[{"name":"group1",
"id":10001,
"description":"just some random test",
"expression":["groupby resource.type.plugin","groupby resource.type.name"],
"recursive":false,
"recalcInterval":180000,
"generatedGroupIds":[10082,10091]},
{"name":"platforms",
"id":10002,
"description":"",
"expression":["resource.type.category = PLATFORM","groupby resource.name"],
"recursive":false,
"recalcInterval":0,
"generatedGroupIds":[10152]}
]
Get a single definition by id:

$ curl -i --user rhqadmin:rhqadmin http://localhost:7080/rest/1/group/definition/10002
{"name":"platforms",
"id":10002,
"description":"",
"expression":["resource.type.category = PLATFORM","groupby resource.name"],
"recursive":false,
"recalcInterval":0,
"generatedGroupIds":[10152]
}


As you can see in the above examples, the actual expression is encoded as a list, with each line of the expression being an item in the list. The recalculation interval has to be given in milliseconds.

Delete a definition (by id):

$ curl -i --user rhqadmin:rhqadmin http://localhost:7080/rest/1/group/definition/10031 -X DELETE
Create a new definition:

$ curl -i --user rhqadmin:rhqadmin http://localhost:7080/rest/1/group/definitions \
-HContent-Type:application/json -HAccept:application/json -X POST \
-d '{"name":"test1","description":"Hello","expression":["groupBy resource.name"]}'
HTTP/1.1 201 Created
Location: http://localhost:7080/rest/1/group/definition/10041


For creation, a name is required. The location of the created group definition is returned in the Location header of the response.

And finally to update a definition:

curl -i --user rhqadmin:rhqadmin http://localhost:7080/rest/1/group/definition/10041?recalculate=true \
-HContent-Type:application/json -HAccept:application/json -X PUT \
-d '{"name":"test4","description":"Hello","expression":["groupBy resource.name"]}'


By passing the query parameter recalculate=true we can trigger a re-calculation of the groups defined by this group definition.
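For clients written in Java rather than curl, the create/update body can be assembled as plain JSON, with the expression as an array of one string per line. This is a hypothetical helper, not part of the RHQ API; actually POSTing the result to /rest/1/group/definitions requires a running RHQ server (e.g. via java.net.HttpURLConnection with basic auth).

```java
// Hypothetical sketch: build the JSON payload used in the POST/PUT examples
// above. Only payload construction is shown; sending it needs a live server.
import java.util.List;

public class GroupDefinitionPayload {

    /** Build the JSON body; the expression becomes a JSON array, one line per item. */
    public static String create(String name, String description, List<String> expression) {
        StringBuilder expr = new StringBuilder();
        for (String line : expression) {
            if (expr.length() > 0) expr.append(",");
            expr.append("\"").append(line).append("\"");
        }
        return "{\"name\":\"" + name + "\"," +
               "\"description\":\"" + description + "\"," +
               "\"expression\":[" + expr + "]}";
    }

    public static void main(String[] args) {
        // Same payload as in the create example above
        System.out.println(create("test1", "Hello", List.of("groupBy resource.name")));
    }
}
```

In a real client you would of course use a JSON library (Jackson was current at the time) instead of string concatenation, which among other things handles escaping of quotes inside names and expressions.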