Tuesday, July 30, 2013

Video Conferencing Project in Java Source Code

My video conferencing project was completed as a 300-level project in my 3rd year. It is not totally complete, and there are still some bugs in it. To get these bugs fixed and to help other students, the project is open. It is my first attempt at open-sourcing a project in this way so that it can be modified, and any kind of question about this project will be answered. As this project was created in the 1st semester of my 3rd year and I have now nearly completed my BSc, there is only a little description here; I will try to describe every class and function later when I get the chance.



For more details and instructions on how to build and run, go to this link on github.com.

Latest code and binary release : VideoConference-v1.1
Older code and binary release : VideoConference-v1.0

Fix:
1. The text chat option no longer hangs for good while a video chat is in progress.


#################################################################################
FEATURE
#################################################################################

1. Multi chat (uses a thread pool)
2. P2P chat
3. P2P audio chat
4. P2P video chat
5. Completely automated
6. H.263-compressed video
7. Raw audio


PREREQUISITE:

1. Just install JMF 2.1.1e.



How it will look:

Server: (screenshot)

Client: (screenshot)

Client Action: (screenshot)

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@


Details:



ShowMe
======

Video Conferencing Project in Java with JMF.


This is a video conferencing project. It is not fully complete yet.
With it, anyone can have a video chat within the same network, have a text chat across any network, and also send files.

#################################################################################
FEATURE
#################################################################################

1. Multi chat (uses a thread pool)
2. P2P chat
3. P2P audio chat
4. P2P video chat
5. Completely automated
6. H.263-compressed video
7. Raw audio

Feature details:

i. Text chat and file transfer are done through the server, which acts as a middle-man or relay; all of that data goes through the server. So if anyone has the server running on a public IP, these features can be used from any network. This is not true for video chat.


PREREQUISITE:

1. Just install JMF 2.1.1e (32-bit).
2. A 32-bit Java SDK must be installed.


Compatibility:

IDE:
1. Tested with JCreator Pro.
2. Tested with Eclipse.


COMPONENT:

1. A Relay Server.
2. A Client.


1. Relay Server:
----------------
```
i.   ClientListener.java
ii.  ClientMap.java
iii. Clients.java
iv.  Main.java
v.   MessageListener.java
vi.  ServerConstant.java
vii. ServerMonitor.java
viii. ServerStatusListener.java
```

2. Client:
----------
```
i.   AVReceive2.java
ii.  AVTransmit2.java
iii. ClientConstant.java
iv.  ClientListListener.java
v.   ClientListPanel.java
vi.  ClientManager.java
vii. ClientStatusListener.java
viii. ClientWindowListener.java
ix.  FileReceiver.java
x.   FileSender.java
xi.  LoginFrame.java
xii. LogInPanel.java
xiii. Main.java
xiv. MessageRecever.java
xv.  MessagingFrame.java
xvi. VideoTransmit.java
```

Configure:
----------

To run this project you must have 3 different PCs, e.g.:
1. Server PC
2. Client 1 PC
3. Client 2 PC

You also need to install JMF 2.1.1e on every PC. After that installation, check your webcam with the JMF application available on your PC. If the webcam is accessible through your JMF app, then this project should work.

After that you need to build the server and client code with the correct IP and port set in the files ServerConstant.java and ClientConstant.java. You can build this project with any Java IDE, but I encourage using JCreator Pro.

For the server settings, in the file ServerConstant.java -

```
public interface ServerConstant
{
    public static final int SERVER_PORT=12345;
    public static final int BACKLOG=100;
    public static final int CLIENT_NUMBER=100;
    public static final int MULTICAST_SENDING_PORT=5555;
    public static final int MULTICAST_LISTENING_PORT=5554;
    public static final String MULTICAST_ADDRESS="239.0.0.1";
    public static final String DISCONNECT_STRING="DISCONNECT";
    public static final String MESSAGE_SEPARATOR=" >> ";
   
}
```

SERVER_PORT=12345 is the port on which the server listens/waits for clients.


For the client settings, in the file ClientConstant.java -
```
public interface ClientConstant
{
    public static final String SERVER_ADDRESS="172.17.0.32";
    public static final int SERVER_PORT=12345;
    public static final int CLIENT_NUMBER=100;
    public static final String DISCONNECT_STRING="DISCONNECT";
    public static final String MESSAGE_SEPARATOR=" >> ";
}
```

SERVER_ADDRESS="172.17.0.32" is the address of the server PC and SERVER_PORT=12345 is the server listening port.

Try Out:
--------

After all of the installation, settings, and build steps, just run the server on a PC via the server jar or from your IDE. Then, from the server GUI, just click the *Start* button.

Now, from the other 2 PCs, use the client jar or run from the IDE, provide a username in the user window, and click *Signin*.
Now you are done. Any user logged in to your server can see the other users who are logged in to it.

Then, from the friend list, click on a friend and a new window will open for chat. In that window there is a button to start a video chat and a send button for text chat. You can use either one, depending on what you need.

Troubleshooting:
----------------

If you don't see the video starting, please change the value of your webcam's descriptor ("vfw://0") in the class *MessageRecever.java*:

```
AVTransmit2 vt = new AVTransmit2(new MediaLocator("vfw://0"),pt,"20006",null);
AVTransmit2 at = new AVTransmit2(new MediaLocator("javasound://8000"),pt,"20008",null);
```
It may be --

1. vfw://0
2. vfw://1
3. vfw://2 , etc.

JCreator Pro build issue:

If you are using JCreator, you must add the JMF library path in the project properties. This can be done via

*Project->Project Settings->Required Libraries*, adding the JMF library's name and location as an archive.

Known Issues:

1. We don't have SIP.
2. We don't have STUN or TURN.
3. While in a video call, you can't have a text chat session.
4. We don't have any authentication mechanism for user sign-in.
5. File transfer is disabled.




Easy duplex ALSA audio capture and playback for CentOS (up to 6.x) and Ubuntu, with an easy API

Advanced Linux Sound Architecture (known by the acronym ALSA) is a free and open source software framework released under the GNU GPL and the GNU LGPL that provides an API for sound card device drivers. It is part of the Linux kernel. Some of the goals of the ALSA project at its inception were automatic configuration of sound-card hardware and graceful handling of multiple sound devices in a system, goals which it has largely met. A couple of different frameworks, such as JACK, use ALSA to allow performing low-latency professional-grade audio editing and mixing.

Concepts

This section provides an overview of basic concepts pertaining to ALSA.
Typically, ALSA supports up to eight cards, numbered 0 through 7; each card is a physical or logical kernel device capable of input, output, or control of sound, and card number 0 is used by default when no particular card is specified. Furthermore, each card may also be addressed by its id, which is an explanatory string such as "Headset" or "ICH9".
A card has devices, numbered starting at 0; a device may be of playback type, meaning it outputs sound from the computer, or some other type such as capture, control, timer, or sequencer; device number 0 is used by default when no particular device is specified.
A device may have subdevices, numbered starting at 0; a subdevice represents some relevant sound endpoint for the device, such as a speaker pair. If the subdevice is not specified, or if subdevice number -1 is specified, then any available subdevice is used.
A card's interface is a description of an ALSA protocol for accessing the card; possible interfaces include: hw, plughw, default, and plug:dmix. The hw interface provides direct access to the kernel device, but no software mixing or stream adaptation support. The plughw and default interfaces enable sound output where the hw interface would produce an error.
An application typically describes sound output by combining all of the aforementioned specifications together in a device string, which has one of the following forms (which are case sensitive):
  • interface:card,device,subdevice
  • interface:CARD=1,DEV=3,SUBDEV=2.
An ALSA stream is a data flow representing sound; the most common stream format is PCM that must be produced in such a way as to match the characteristics or parameters of the hardware, including:
  • sampling rate: 44.1 kHz on home stereos, and 48 kHz on home theaters
  • sample width: measured in some number of bits per sample (such as 8, 16, 24, or 32 bits/sample)
  • sample encoding
  • number of channels: 1 for mono, 2 for stereo, or 6 for AC-3/IEC958
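
To make the device strings and stream parameters above concrete, here is a minimal playback sketch using the plain ALSA C API. This is illustrative only and not part of any project discussed here; the device string "plughw:0,0" and the 48 kHz / stereo / S16_LE parameters are just example values.

```
// Minimal ALSA playback sketch: open a device by its device string and
// configure the PCM stream parameters (format, access, channels, rate).
// Build (assumed): g++ alsa_sketch.cpp -lasound
#include <alsa/asoundlib.h>
#include <cstdio>

int main()
{
    snd_pcm_t *handle = NULL;

    // "plughw:0,0" = interface:card,device; the plug layer adds conversion support.
    int err = snd_pcm_open(&handle, "plughw:0,0", SND_PCM_STREAM_PLAYBACK, 0);
    if (err < 0) {
        printf("open failed: %s\n", snd_strerror(err));
        return 1;
    }

    // 16-bit little-endian samples, interleaved access, 2 channels, 48000 Hz,
    // allow software resampling, 500 ms maximum latency.
    err = snd_pcm_set_params(handle, SND_PCM_FORMAT_S16_LE,
                             SND_PCM_ACCESS_RW_INTERLEAVED,
                             2, 48000, 1, 500000);
    if (err < 0) {
        printf("set_params failed: %s\n", snd_strerror(err));
        snd_pcm_close(handle);
        return 1;
    }

    // ... write interleaved frames with snd_pcm_writei(handle, buf, frames) ...

    snd_pcm_close(handle);
    return 0;
}
```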

What does IdeaAudio provide?
---------------------------------

IdeaAudio provides a shared library (.so) and an interface. You just need to add this library and interface to your project to use ALSA audio. It also provides a SampleApp, compatible with the Eclipse IDE, demonstrating the use of an audio device.

Download:
------------

SampleApp source code and library Link

Direct Binary and .so Download:
-------------------
Link

Test:
-----

A successful test log is also provided at the link, as IdeaAudio.log.

API documentation:
----------------------
struct DeviceInfo{
    std::string m_sDeviceName;
    std::string m_sDeviceID;

};

/**
*    DeviceInfo stores the information of an available audio device
*    @param m_sDeviceName is the name of the device
*    @param m_sDeviceID is the device ID (ex: plughw:0,0, plughw:1,0, etc.)
*/


int AvailableAudioDevices(bool bInput, std::vector<DeviceInfo>&Device);

/**
*    AvailableAudioDevices is used to find the available audio devices as plughw:X,Y (ex: plughw:0,0, plughw:1,0, etc.)
*    @param bInput is true for capture and false for playback
*    @param Device is the list of DeviceInfo entries found after the execution of this API
*    @return the number of audio devices
*/

void ConfigureBetterOptionForAudio(char *sDeviceName, int &iSamplerate, int &iChannels, int &iBufferSize, int &iFragments);

Note: not active right now.

/**
*    ConfigureBetterOptionForAudio automatically finds suitable values for a particular device
*    @param sDeviceName is the device name for which evaluation is needed
*    @param iSamplerate is the sampling rate of the audio data (ex: 48000 Hz)
*    @param iChannels is the channel count that needs to be provided
*    @param iBufferSize is the buffer size for which we are evaluating
*    @param iFragments is the fragment count that will be provided after evaluation
*/


int OpenCaptureDevice(char *sDeviceName, int iSamplerate, int iChannels, int iBufferSize, int iFragments);


/**
*    OpenCaptureDevice is used to open the audio capture device
*    @param sDeviceName is the deviceID from std::vector<DeviceInfo> &Device (ex: plughw:0,0, plughw:1,0, etc.)
*    @param iSamplerate is the sampling rate of audio data (ex: 48000 Hz)
*    @param iChannels is the channel number
*    @param iBufferSize is the buffer size
*    @param iFragments is the fragment number
*/

int OpenPlaybackDevice(char *sDeviceName, int iSamplerate, int iChannels, int iBufferSize, int iFragments);


/**
*    OpenPlaybackDevice is used to open the audio playback device
*    @param sDeviceName is the deviceID from std::vector<DeviceInfo> &Device (ex: plughw:0,0, plughw:1,0, etc.)
*    @param iSamplerate is the sampling rate of audio data (ex: 48000 Hz)
*    @param iChannels is the channel number
*    @param iBufferSize is the buffer size
*    @param iFragments is the fragment number
*/

void StartCapture(char *pBuff, int nLen);

/**
*    StartCapture starts capturing from the opened capture audio device
*    @param pBuff is the buffer which is filled after a successful capture
*    @param nLen is the size of the needed data buffer
*/

void StartPlayback(char *pBuff, int nLen);

/**
*    StartPlayback starts playback on the opened playback audio device
*    @param pBuff is the buffer which is full and needs to be played
*    @param nLen is the size of the data buffer
*/


void CloseCapture();

/**
*    CloseCapture closes the opened capture audio device
*/


void ClosePlayback();

/**
*    ClosePlayback closes the opened playback audio device
*/


virtual void OnAudioDataCaptured(char *pBuff, int iLen) = 0;

Note: not working perfectly right now.

/**
*    OnAudioDataCaptured is an event, fired after a full-length capture
*    @param pBuff is the buffer which is filled after a successful capture
*    @param iLen is the size of the captured data buffer
*/
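
Usage example (sketch):
-----------------------

Putting the API together, a simple duplex capture-to-playback loop could look like the sketch below. This is only a sketch under assumptions: the header name IdeaAudio.h is hypothetical, the functions are assumed to be callable exactly as declared above (they may instead be members of the library's interface class), and the sample rate, channel count, buffer size, and fragment count are just example values. It uses the blocking StartCapture/StartPlayback calls rather than the OnAudioDataCaptured callback.

```
#include <cstdio>
#include <vector>
#include "IdeaAudio.h"   // hypothetical header name for the IdeaAudio interface

int main()
{
    std::vector<DeviceInfo> capDevs, playDevs;
    if (AvailableAudioDevices(true, capDevs) == 0 ||      // capture devices
        AvailableAudioDevices(false, playDevs) == 0) {    // playback devices
        printf("No capture or playback device found\n");
        return 1;
    }

    // Example parameters only; pick values that match your hardware.
    const int iSamplerate = 48000, iChannels = 2, iBufferSize = 4096, iFragments = 4;

    // Open the first capture and playback devices found (e.g. plughw:0,0).
    OpenCaptureDevice((char *)capDevs[0].m_sDeviceID.c_str(),
                      iSamplerate, iChannels, iBufferSize, iFragments);
    OpenPlaybackDevice((char *)playDevs[0].m_sDeviceID.c_str(),
                       iSamplerate, iChannels, iBufferSize, iFragments);

    char buff[4096];
    for (int i = 0; i < 1000; ++i) {        // loop back roughly 1000 buffers of audio
        StartCapture(buff, sizeof(buff));   // fills buff from the capture device
        StartPlayback(buff, sizeof(buff));  // plays buff on the playback device
    }

    CloseCapture();
    ClosePlayback();
    return 0;
}
```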

PIPILIKA the Bangla Search Engine

It has been nearly 2 years since we started attending our university and working on such a great project as a search engine.
What's more, it is also language specific: it works with Bangla search, producing meaningful, context-aware results.

In this great project I worked on building a distributed index with Hadoop. Our work was at thesis level and was later implemented. Moreover, seeing the beta release of our little child PIPILIKA energizes every living organ of my body. The release date is fixed for April 13, 6.30 pm BD time. The venue is the Shonar Bangla Hotel (Sheraton). We will attend, as we are invited for working on this project. Keep an eye on this search engine before it becomes a giant.

Link PIPILIKA.COM

Wednesday, July 24, 2013

Developing a Microsoft Lync client in C++

First, for one of my visitors' wishes, I am going to start from step 13. I will describe the previous steps later.
Step 13: gssapi-data generation. It is the most crucial part of Lync development. It gave me a lot of pain.
I will describe it simply... for some reasons I can't give you the code.

I will go through the process in a straightforward way.

First, take the challenge data, i.e. the gssapi-data, from the second 401 Unauthorized response, then decode it from Base64. As you know, there are three message types in NTLM.
They are:

Type 1: This message contains the host name and the NT domain name of the client.
Type 2: This message contains the server's NTLM challenge.
Type 3: This message contains the username, host name, NT domain name, and the two "responses".

You already understand which ones we need: the Type 2 message from the server as the challenge, and the Type 3 message which you will send. For more detail about the types, see the link >>> ALL about NTLM TYPE
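
If it helps, decoding the gssapi-data from Base64 might look like the sketch below. This is not the code I used; it just uses OpenSSL's EVP_DecodeBlock as one possible way to do the decoding step.

```
#include <openssl/evp.h>
#include <string>
#include <vector>

// Decode a Base64 gssapi-data string into raw bytes.
// EVP_DecodeBlock works on 4-byte groups, so the '=' padding bytes have to be
// subtracted from the returned length by hand.
std::vector<unsigned char> DecodeGssapiData(const std::string &sBase64)
{
    std::vector<unsigned char> out(3 * (sBase64.size() / 4) + 3);
    int len = EVP_DecodeBlock(out.data(),
                              (const unsigned char *)sBase64.data(),
                              (int)sBase64.size());
    if (len < 0) { out.clear(); return out; }

    // Strip the bytes that correspond to trailing '=' padding characters.
    size_t pad = 0;
    while (pad < 2 && pad < sBase64.size() && sBase64[sBase64.size() - 1 - pad] == '=')
        ++pad;
    out.resize(len - (int)pad);
    return out;
}
```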

Please don't take their message format description as the literal byte stream of the data.

The exact format of the Type 2 challenge data is given at >> http://msdn.microsoft.com/en-us/library/cc236642%28v=prot.13%29.aspx

To make it easier I will describe it here, because it pained me so much.

The Type 2 message format is like this >>>

    signature[8];
    int 32bit         message_type;

    unsigned short    length |
    unsigned short    space  |----------- target name
    unsigned int      offset |

    int 32bit         flags;
    unsigned char     challenge[8];
    guint8            zero1[8];

    unsigned short    length |
    unsigned short    space  |----------- target info
    unsigned int      offset |

    int 8bit          product_major_version |
    int 8bit          product_minor_version |
    int 16bit         product_build         |-- Version
    int 8bit          zero2[3]              |
    int 8bit          ntlm_revision_current |

Just break the decoded gssapi-data up in that format, that's all.

The target name is just the domain name; take it as Unicode. Now the target info: it holds the server's time info in milliseconds, for synchronization or security; keep it as it is. It is a list of AvPairs.

Always keep in mind that all of the data is stored at ARRAY[OFFSET].

Example:

unsigned char *pTargetInfo;
pTargetInfo = (unsigned char *)&(sDecoded64.c_str()[target_info.offset]);

You understand what I have indicated... :)
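
As a rough illustration of that offset-based bookkeeping, a parser for the two security buffers of the Type 2 message could look like the sketch below. The struct and helper names are mine, not the real code, and it assumes a little-endian host; a real parser must also bounds-check every offset and length against the decoded buffer size.

```
#include <cstring>
#include <string>

// Layout of an NTLM "security buffer" inside the Type 2 message:
// 16-bit length, 16-bit max length ("space"), 32-bit offset from message start.
struct SecurityBuffer {
    unsigned short length;
    unsigned short space;
    unsigned int   offset;
};

// Illustrative helper: copy the bytes a security buffer points at.
// pMsg is the Base64-decoded Type 2 message.
std::string ReadSecurityBuffer(const unsigned char *pMsg, const SecurityBuffer &buf)
{
    return std::string((const char *)&pMsg[buf.offset], buf.length);
}

// Usage idea, following the layout above (assumes a little-endian host):
// the target name security buffer sits at bytes 12..19 of the message and
// the target info security buffer at bytes 40..47.
//
//   SecurityBuffer targetName;  memcpy(&targetName, pMsg + 12, 8);
//   SecurityBuffer targetInfo;  memcpy(&targetInfo, pMsg + 40, 8);
//   std::string sTargetName = ReadSecurityBuffer(pMsg, targetName); // UTF-16LE domain name
//   std::string sTargetInfo = ReadSecurityBuffer(pMsg, targetInfo); // AvPair list
```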

Tuesday, July 23, 2013

Easy understanding of RTP packets.

For any kind of real-time media transmission over the internet, it is mandatory to follow some specification. We know that the basic units of data transmitted over the internet are UDP or TCP packets. Now the main question is how data of a specific type (ICMP, RTP, RTCP, etc.) will be transmitted.


Now here we will talk about RTP packets.

RTP --- means -- Real-time Transport Protocol (as we all know!!!!).

The protocol details are given in RFC 3550.

RTP packets:

Q.1. How are RTP transmission packets constructed?
Q.2. What is the format of an RTP packet?
Q.3. Describe the RTP header.

Q.1. How are RTP transmission packets constructed?
Ans: First of all, RTP transmission packets are not encrypted. They carry the raw data of a media encoder. For example, an H.264 encoder creates the raw data by taking input from a video source, and a G.722 audio encoder creates the raw data by taking input from an audio source. This data is stored as the payload of an RTP packet without any encryption. Now the most important part is how the data is arranged in a UDP/TCP packet so that the network/any receiver will understand it as an RTP packet. For easy understanding we will take UDP transport for the RTP packet, and the codec will be x-vrtc1 (121).


An RTP transmission packet has 3 parts for a successful transmission.

Part 1: IP header (source and destination)
Part 2: Transport protocol header and port (source and destination)

Part 1 + Part 2 = 42 bytes for UDP (14-byte Ethernet header + 20-byte IPv4 header + 8-byte UDP header)

Part 3: RTP packet (size depends on the network MTU)


Q.2. What is the format of an RTP packet?
Ans: In basic words, it is just an equation. The equation is:

RTP packet = RTP header (12 bytes) + RTP payload (n bytes)

In details it is like -- 

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |V=2|P|X|  CC   |M|     PT      |       sequence number         |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                           timestamp                           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           synchronization source (SSRC) identifier            |
   +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
 
 
 
 
The first 12 bytes of an RTP packet are the RTP header.
The next n bytes are the RTP payload, i.e. the data.


So now you know everything. Then, what is the crucial part?
The crucial part is that the payload itself is divided into 2 parts:

Payload = payload header + payload data

Example: an H.263 (PT 34) payload = 4-byte payload header + n-byte payload data.

If you fail to differentiate these 2 parts, most of the time your decoder will fail to decode the data.
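
As a small illustration of that split, the sketch below only separates the payload header from the payload data for H.263. It assumes the 4-byte (RFC 2190 Mode A) payload header mentioned above; Modes B and C use longer headers, so a real depacketizer has to check the mode bits first.

```
#include <cstddef>

// Split an RTP payload for H.263 (PT 34) into payload header and payload data.
// Assumes the 4-byte Mode A payload header described above.
bool SplitH263Payload(const unsigned char *pPayload, size_t nLen,
                      const unsigned char **ppData, size_t *pnDataLen)
{
    const size_t kPayloadHeaderLen = 4;
    if (nLen <= kPayloadHeaderLen)
        return false;                              // nothing left for the decoder
    *ppData    = pPayload + kPayloadHeaderLen;     // bytes handed to the H.263 decoder
    *pnDataLen = nLen - kPayloadHeaderLen;
    return true;
}
```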


Q.3. Describe the RTP header.
Ans: If you ask me, I'll say there is nothing so hard that it needs much describing.

1. Version (V=2): constant.

2. Padding (P): tells you how much padding data at the end of the payload you need to ignore.

3. Extension (X): not so important to implement.

4. CSRC count (CC): the number of contributing-source (CSRC) identifiers that follow the fixed header. Important when some media-profile-specific extensions are enforced; it is basically info about the media sources behind the received or sent packets.

5. Marker (M): most important for identifying the end of a frame. If it is 1, this is the last packet of a complete frame.

6. Payload Type (PT): used to determine the codec. For x-vrtc1 it is 121 and for H.263 it is 34.

7. Sequence number: differentiates one packet from another and maintains ordering so that a receiver can reassemble a complete frame.

8. Timestamp: used to identify the timing/delay between packets.

9. SSRC: the source identifier, which uniquely identifies the RTP stream for a particular media line in a conference or a call.
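
To tie these fields together, here is a minimal sketch (names are mine) that reads the 12-byte fixed header out of a received packet; it ignores the CSRC list and header extensions, which would shift where the payload starts.

```
#include <cstdint>
#include <cstddef>

// Fields of the 12-byte fixed RTP header (RFC 3550), as drawn above.
struct RtpHeader {
    uint8_t  version;        // V, should be 2
    bool     padding;        // P
    bool     extension;      // X
    uint8_t  csrcCount;      // CC
    bool     marker;         // M, last packet of a frame for video
    uint8_t  payloadType;    // PT, e.g. 34 for H.263
    uint16_t sequenceNumber;
    uint32_t timestamp;
    uint32_t ssrc;
};

// Parse the fixed header from the start of an RTP packet (network byte order).
bool ParseRtpHeader(const uint8_t *p, size_t len, RtpHeader &h)
{
    if (len < 12) return false;            // shorter than the fixed header
    h.version        = p[0] >> 6;
    h.padding        = (p[0] >> 5) & 0x1;
    h.extension      = (p[0] >> 4) & 0x1;
    h.csrcCount      = p[0] & 0x0F;
    h.marker         = (p[1] >> 7) & 0x1;
    h.payloadType    = p[1] & 0x7F;
    h.sequenceNumber = (uint16_t)((p[2] << 8) | p[3]);
    h.timestamp      = ((uint32_t)p[4] << 24) | ((uint32_t)p[5] << 16)
                     | ((uint32_t)p[6] << 8)  |  (uint32_t)p[7];
    h.ssrc           = ((uint32_t)p[8] << 24) | ((uint32_t)p[9] << 16)
                     | ((uint32_t)p[10] << 8) |  (uint32_t)p[11];
    // The payload starts at 12 + 4 * csrcCount (plus any header extension).
    return h.version == 2;
}
```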



That's all !!!!

You are good to go.




How to generate and use an SSH key on Gerrit, github.io, GitLab, and Bitbucket.

 Details can be found here -