In this blog post, we will create a data pipeline in Azure Data Factory that uses Jira (a project management tool) as the source and Azure Blob Storage as the destination.
Step-by-step process
- Step 1 - Create a Jira account (if you don't already have one)
- Step 2 - Create some dummy tickets (if you don't already have any), for example:
| Epic | Story | Task |
| --- | --- | --- |
| 📹 Camera Feed Integration | Connect CCTV camera feed via RTSP | Research RTSP protocol |
| | | Write ingestion script for RTSP |
| | Support ONVIF protocol | Research ONVIF standard |
| | | Implement ONVIF connector |
| | Store video streams in database | Set up storage (e.g., S3/Blob) |
| | | Write stream-to-storage pipeline |
| 🤖 Object Detection Module | Train object detection model (YOLO/ResNet) | Collect training dataset |
| | | Preprocess dataset |
| | | Train base model |
| | Integrate model with live stream | Build inference pipeline |
| | | Optimize for GPU/Edge devices |
| ⚠️ Anomaly & Alerting System | Detect motion/suspicious activity | Research anomaly detection algorithms |
| | | Implement motion detection logic |
| | Send alerts on anomaly detection | Integrate with Twilio/Email |
| | | Build alert dashboard |
| 📊 Dashboard & Monitoring | Build real-time monitoring dashboard | UI design for dashboard |
| | | Integrate video + detection overlay |
| | | Implement filtering/search |
| | Generate analytics reports | Create reporting pipeline |
| | | Export reports to PDF/Excel |
| 🔒 Security & Authentication | Implement role-based authentication | Integrate with OAuth2/JWT |
| | | Create Admin/User roles |
| | Secure video storage & streams | Apply encryption (SSE-KMS) |
| | | Implement access logging |
| ☁️ Cloud & Scaling | Deploy on cloud (AWS/Azure) | Create infrastructure (Terraform/ARM) |
| | | Set up CI/CD pipeline |
| | Scale with Kubernetes | Create Helm charts |
| | | Configure auto-scaling |
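If you would rather not click these tickets in by hand, they can also be created through Jira's REST API (`POST /rest/api/3/issue`). Below is a minimal sketch of building the request body that endpoint expects; the project key `DEMO` and the helper name `issue_payload` are illustrative assumptions, not part of Jira's API.

```python
import json

def issue_payload(project_key: str, summary: str, issue_type: str) -> dict:
    # Minimal fields Jira requires when creating an issue via POST /rest/api/3/issue
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "issuetype": {"name": issue_type},
        }
    }

# Example: one of the dummy tasks from the table above
payload = issue_payload("DEMO", "Research RTSP protocol", "Task")
print(json.dumps(payload, indent=2))
```

You would POST this body (with the Basic-auth header from Step 3) to `https://<domain>.atlassian.net/rest/api/3/issue` for each row of the table.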
- Step 3 - Create a Jira API token from here:
- https://id.atlassian.com/manage-profile/security/api-tokens
- The username will be your email address (e.g., abc@gmail.com)
- Save the API token somewhere safe; you will need it in the next steps
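Before wiring the token into Data Factory, it helps to sanity-check it. Jira Cloud uses HTTP Basic auth, with the email as the username and the API token as the password. A small sketch of building that header is below; the email and token values are placeholders, and the live request is left commented out since it needs the `requests` package and a real token.

```python
import base64

def basic_auth_header(email: str, api_token: str) -> dict:
    # Jira Cloud Basic auth: base64-encode "email:api_token"
    raw = f"{email}:{api_token}".encode("utf-8")
    return {"Authorization": "Basic " + base64.b64encode(raw).decode("ascii")}

headers = basic_auth_header("abc@gmail.com", "YOUR_API_TOKEN")
print(headers["Authorization"][:6])  # prints "Basic "

# Hypothetical live check (pip install requests, then use your real token):
# import requests
# r = requests.get("https://<domain>.atlassian.net/rest/api/3/myself", headers=headers)
# r.raise_for_status()  # HTTP 200 means the email/token pair works
```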
These are some of the API endpoints available in Jira:

| Endpoint | Method | Description | Example |
| --- | --- | --- | --- |
| /rest/api/3/serverInfo | GET | Get Jira server details (version, deployment). | |
| /rest/api/3/myself | GET | Get current authenticated user details. | |
| /rest/api/3/project | GET | List all projects visible to the user. | |
| /rest/api/3/project/{projectKey} | GET | Get details of a specific project. | |
| /rest/api/3/issue/{issueKey} | GET | Get details of a specific issue. | |
| /rest/api/3/search?jql=… | GET | Search issues using JQL (pagination supported). | https://<domain>.atlassian.net/rest/api/3/search?jql=project=TEST |
| /rest/api/3/priority | GET | Get the list of issue priorities. | |
| /rest/api/3/issuetype | GET | Get all available issue types. | |
| /rest/api/3/user/search?query=email | GET | Search for users. | https://<domain>.atlassian.net/rest/api/3/user/search?query=abhi@example.com |
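Note that the search endpoint is paginated: responses carry `startAt`, `maxResults`, and `total` fields, so fetching everything means looping until `total` is exhausted. A sketch of that paging loop is below; the HTTP call is injected as a function so the logic can be shown without a live network call, and the names `search_all` and `fetch_page` are illustrative, not part of Jira's API.

```python
def search_all(fetch_page, jql: str, page_size: int = 50) -> list:
    """Collect all issues for a JQL query across pages.

    fetch_page(jql, start_at, max_results) must return a dict shaped like
    Jira's search response: {"startAt": ..., "total": ..., "issues": [...]}.
    """
    issues, start_at = [], 0
    while True:
        page = fetch_page(jql, start_at, page_size)
        issues.extend(page["issues"])
        start_at += len(page["issues"])
        if start_at >= page["total"] or not page["issues"]:
            break
    return issues

# Fake pager standing in for GET .../rest/api/3/search?jql=...&startAt=...
def fake_page(jql, start_at, max_results):
    data = [{"key": f"TEST-{i}"} for i in range(1, 8)]  # pretend total = 7
    return {"startAt": start_at, "total": len(data),
            "issues": data[start_at:start_at + max_results]}

print(len(search_all(fake_page, "project=TEST", page_size=3)))  # prints 7
```

In a real run, `fetch_page` would call the search URL with the Basic-auth header and pass `startAt`/`maxResults` as query parameters.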
Step 4 - Create a pipeline in Azure Data Factory
- Add a Copy Data activity and click on Source
- Search for REST and create a REST dataset named JIRA
- Click on New linked service
- Set the Base URL to https://<domain>.atlassian.net/rest/api/3/project
- Set the Authentication type to Basic
- The username is your email address
- The password is your API token
Step 5 - Click Preview data (if it does not work, generate a new API token)

Step 6 - Configure the sink by creating a Blob Storage dataset
Step 7 - Publish the pipeline and trigger it.
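What the Copy Data activity does here can be sketched locally: take the JSON that `GET /rest/api/3/project` returns and flatten it into rows before landing it in storage. The field names `id`, `key`, and `name` are real Jira project fields, but the sample values below are made up for illustration.

```python
import csv
import io

def projects_to_csv(projects: list) -> str:
    """Flatten Jira project objects into CSV text, like a sink-side file."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "key", "name"])
    writer.writeheader()
    for p in projects:
        writer.writerow({"id": p["id"], "key": p["key"], "name": p["name"]})
    return buf.getvalue()

# Made-up sample of what GET /rest/api/3/project might return
sample = [
    {"id": "10000", "key": "TEST", "name": "Surveillance Platform"},
    {"id": "10001", "key": "OPS", "name": "Operations"},
]
print(projects_to_csv(sample))
```

In the real pipeline, Data Factory performs this flattening for you when the sink dataset is delimited text; the sketch just makes the source-to-sink mapping concrete.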
Conclusion
Integrating Jira with Azure Data Factory automates the transfer of project data to Azure Blob Storage, giving teams a reliable feed for reporting and analytics, and ultimately better insights for smarter decision-making and streamlined workflows.
Tell me in the comments what you think of this approach. And if you have any problems with the implementation, feel free to drop a comment, and I will reply within 24 hours.