# Camera Intrinsic Parameters

- ROS Melodic
- OpenCV 3.4 or later
- C++17
- Parameters set manually

** TODO
## Using Docker

Docker must be installed, along with the NVIDIA Docker image:

```
$ sudo apt-get install x11-xserver-utils
$ xhost +
$ docker pull authorsoo/px4:9.0
$ docker run --gpus all -it --ipc=host --expose 22 --net=host --privileged -e DISPLAY=unix$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix:rw -e NVIDIA_DRIVER_CAPABILITIES=all --name calib authorsoo/px4:9.0 bash
```
When launching via a launch file instead of rosrun, set the parameters directly:

```
$ roslaunch mono_cam_calib mono_calib
```
## Using a Dataset

When using Docker, the dataset files are located in the /home directory.

```
$ rosrun mono_cam_calib mono_cam_calib
$ rosbag play [rosbag filename]   # play the rosbag
```
The txt file in the result folder contains the intrinsic, distortion, R, and T values.
Change the output file format from txt to YAML or JSON.
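The txt-to-JSON conversion mentioned above could be sketched as follows. This is a minimal illustration, not the project's actual converter: the key names and the flat 3x3 row-major layout are assumptions to adapt to the real result/*.txt format produced by mono_cam_calib.

```python
# Sketch: pack calibration results (intrinsic, distortion, R, T) into JSON.
# Key names and matrix layout are assumptions, not the tool's real schema.
import json

def calib_to_json(intrinsic, distortion, R, T):
    """Return the calibration values as a pretty-printed JSON string."""
    return json.dumps(
        {
            "camera_matrix": {"rows": 3, "cols": 3, "data": intrinsic},
            "distortion_coefficients": distortion,
            "rotation": R,       # 3x3, row-major
            "translation": T,    # 3x1
        },
        indent=2,
    )

# Placeholder values (fx, fy, cx, cy and coefficients are illustrative only)
doc = calib_to_json(
    intrinsic=[615.0, 0.0, 320.0, 0.0, 615.0, 240.0, 0.0, 0.0, 1.0],
    distortion=[0.1, -0.25, 0.0, 0.0, 0.0],
    R=[1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0],
    T=[0.0, 0.0, 0.0],
)
```

Writing YAML instead would follow the same shape with a YAML emitter; JSON is shown here because it needs only the standard library.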
** TODO

Organize the Markdown below and practice it for README.md:
- List 1
  - List 2
    - List 3
  - List 2
- List 1
  - List 2
  - List 3
Text
Text

> Quote 1
> Quote 2
>> A quote inside a quote

(1) inline link
(2) reference link
```swift
public struct CGSize {
    public var width: CGFloat
    public var height: CGFloat
    ...
}
```
**Bold text**
~~Strikethrough text~~

Line break using two trailing spaces  
Line break using two trailing spaces

Line break using a `<br>` tag<br>
Line break using a `<br>` tag

Checkboxes can be written as follows:

- [x] Checkbox
- [ ] Empty checkbox
- [ ] Empty checkbox

❤️💜💙🤍
| Left align | Center align | Right align |
|:-----------|:------------:|------------:|
| Content 1  | Content 2    | Content 3   |
| Content 1  | Content 2    | Content 3   |
---
One Paragraph of project description goes here

## Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.
### Prerequisites

What things you need to install the software and how to install them

```
Give examples
```
### Installing

A step by step series of examples that tell you how to get a development env running

Say what the step will be

```
Give the example
```

And repeat

```
until finished
```

End with an example of getting some data out of the system or using it for a little demo
## Running the tests

Explain how to run the automated tests for this system

### Break down into end to end tests

Explain what these tests test and why

```
Give an example
```

### And coding style tests

Explain what these tests test and why

```
Give an example
```
## Deployment

Add additional notes about how to deploy this on a live system

## Built With

- Dropwizard - The web framework used
- Maven - Dependency Management
- ROME - Used to generate RSS Feeds
## Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

## Versioning

We use SemVer for versioning. For the versions available, see the tags on this repository.

## Authors

- Billie Thompson - Initial work - PurpleBooth

See also the list of contributors who participated in this project.

## License

This project is licensed under the MIT License - see the LICENSE.md file for details

## Acknowledgments

- Hat tip to anyone whose code was used
- Inspiration
- etc
This project's purpose is to create a repository with a collection of default settings.

If you use this template, you can use these features:
- Issue Template
- Pull Request Template
- Commit Template
- Readme Template
- Contribute Template
- Pull Request Build Test (with GitHub Actions)
Click **Use this template** and use this template!

- Click the **Use this template** button
- Create a new repository
- Update the README and others (other features are noted in comments.)
I am looking for someone to help with this project. Please advise and point things out.

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.
- Always0ne - SangIl Hwang - si8363@soongsil.ac.kr
See also the list of contributors who participated in this project.
MIT License
Copyright (c) 2020 always0ne
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
- You know how to ask good questions (a similar article in Korean)
- You have completed the official React tutorials
- You have completed the official Next.js tutorials
- You are good at TypeScript
- You know the concepts of Yarn 2
- We use Stitches to style our application
- We prefer flat folders and long file names
```
$ yarn
$ yarn dev

# upgrade deps with interactive CUI
$ yarn upgrade-interactive

# update yarn version for this project
$ yarn set version latest
```
```
.
├── jest        # Jest configurations
├── cypress     # Cypress configurations & tests
└── src
    ├── apis        # API fetch functions
    ├── assets      # Static resources that will be transpiled
    ├── components  # React components
    ├── hooks       # React custom hooks
    ├── itly        # Auto-generated tracking code, see below
    ├── misc        # Ambiguous little things
    ├── modes       # Mold Modes
    ├── pages       # Next.js pages
    ├── shapes      # Mold Shapes
    ├── shared      # Core utility / interface
    └── stitches    # Stitches definition
```
The contents of itly/ are auto-generated by the Amplitude Data CLI (Iteratively). To update the tracking plan, run `ampli pull`. Find more information in Notion.
- We avoid hasty abstraction
- Duplication doesn't matter
- We write test code to save time and create more value
- We recommend reading all of Kent's posts on testing
- Don't Solve Problems, Eliminate Them
We use Vercel to deploy our project.

- `main` branch deployed as production: https://suite-anno-v2.superb-ai.com
- `stage` branch deployed as QA preview (almost the same as production): https://stage.suite-anno-v2.superb-ai.com
- `develop` branch deployed as preview: https://dev.suite-anno-v2.superb-ai.com
- other branches deployed as preview
- Set up the environment:

```
$ docker build -t mmdetection3d -f docker/Dockerfile .
```
Download the KITTI 3D detection data here. Prepare the KITTI data splits by running:

```
mkdir ./data/kitti/ && mkdir ./data/kitti/ImageSets

# Download data split
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/test.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/test.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/train.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/train.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/val.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/val.txt
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/trainval.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/trainval.txt
```
Generate the info files:

```
python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
```
- Convert Suite to KITTI data format
  - Reference: custom_data.ipynb in https://github.com/Superb-AI-Suite/voda.git
  - [TODO] Make a script to download assets/labels from a Suite project with login information.
- Debug: Kitti_GT_3Dbbox_visualization.ipynb in https://github.com/Superb-AI-Suite/voda.git
- Generate the info files:

```
python tools/create_data.py kitti --root-path ./data/superb --out-dir ./data/superb --extra-tag kitti
```
```
python tools/train.py configs/parta2/hv_PartA2_secfpn_2x8_cyclic_80e_kitti-3d-car.py
python tools/train.py configs/superb/custom.py
```

Or use train_demo.ipynb.
- Reference: /config/superb/custom.py
- dataset_type: select from [Kitti, cityspace, waymo, nuscenes]; our code is based on Kitti
- data_root = 'data/superb/': custom data (Suite -> KITTI)
- point_cloud_range: Velodyne coordinates, x, y, z
- input_modality: use_lidar=True, use_camera=False
- resume_from: load a pretrained model
- checkpoint_config: set the checkpoint-saving interval and path, e.g. dict(interval=3, out_dir='/home/eunsoo/dl/mmdetection3d/checkpoints/')
- evaluation: set the evaluation metric, e.g. cfg.evaluation.metric = ['bbox', 'segm']
- learning rate, batch size: e.g. cfg.optimizer = dict(type='SGD', lr=0.0025, momentum=0.9, weight_decay=0.0001)
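The configuration notes above can be gathered into one minimal fragment written in the mmdetection3d config style. This is only a sketch: the point_cloud_range values and out_dir below are illustrative placeholders, not tuned settings from this project.

```python
# Illustrative config fragment following the mmdetection3d dict-based style.
# All concrete numbers here are placeholders; copy the real values from
# /config/superb/custom.py.
data_root = 'data/superb/'                      # custom data (Suite -> KITTI)
point_cloud_range = [0.0, -40.0, -3.0, 70.4, 40.0, 1.0]  # velodyne x, y, z bounds
input_modality = dict(use_lidar=True, use_camera=False)  # lidar-only training
checkpoint_config = dict(interval=3, out_dir='./checkpoints/')  # save every 3 epochs
evaluation = dict(metric=['bbox'])               # evaluation metric(s)
optimizer = dict(type='SGD', lr=0.0025, momentum=0.9, weight_decay=0.0001)
```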
Error situations:

- No point cloud (PCD) inside the bounding box
- Out of range when applying data augmentation
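One way to guard against the out-of-range augmentation error above is to drop points that fall outside point_cloud_range before they enter the pipeline. The sketch below is an assumption about how such a filter could look, not code from this repository; the range values are illustrative.

```python
# Minimal sketch: keep only lidar points inside the configured range.
# The range below is a placeholder; use the value from your config file.
point_cloud_range = [0.0, -40.0, -3.0, 70.4, 40.0, 1.0]  # x/y/z min, x/y/z max

def filter_points(points, pc_range):
    """Return the (x, y, z, ...) points lying inside pc_range."""
    x_min, y_min, z_min, x_max, y_max, z_max = pc_range
    return [
        p for p in points
        if x_min <= p[0] <= x_max
        and y_min <= p[1] <= y_max
        and z_min <= p[2] <= z_max
    ]

pts = [(10.0, 0.0, -1.0), (100.0, 0.0, 0.0), (5.0, -50.0, 0.0)]
kept = filter_points(pts, point_cloud_range)  # only the first point survives
```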