Stroke size control
godofdream opened this issue
In your paper you mention changing the decoder of AdaIN to change the stroke size.
What is the difference between the decoders, e.g. "decoder_stroke_perceptual_loss_1.pth"?
In my case, I would like to stylize an ultra-high-resolution image (8192x8192) while keeping the same "huge" stroke size I would get if I resized the picture to 1024x1024.
Hi, using the model trained with the stroke perceptual loss can increase the stroke size to a certain extent.
But due to the limitation of the effective receptive field, 8192x8192 images still can't reach the stroke size of a 1024x1024 image.
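For reference, one crude workaround for this receptive-field limit is to stylize a downsampled copy of the image and upsample the result, at the cost of fine content detail. Below is a hypothetical PyTorch sketch of that idea, not code from the URST repo; `stylize` and `big_stroke_transfer` are made-up names, with an identity function standing in for a real pretrained network:

```python
# Hypothetical sketch, not from the URST repo: get "work_size-scale" strokes
# on a large image by stylizing a downsampled copy and upsampling the result.
# Fine content detail is lost in the process.
import torch
import torch.nn.functional as F

def stylize(x):
    # placeholder identity standing in for a real style-transfer network
    return x

def big_stroke_transfer(content, work_size=1024):
    h, w = content.shape[-2:]
    # downsample so strokes are scaled relative to work_size, not (h, w)
    small = F.interpolate(content, size=(work_size, work_size),
                          mode="bilinear", align_corners=False)
    stylized = stylize(small)
    # upsample the stylized result back to the original resolution
    return F.interpolate(stylized, size=(h, w),
                         mode="bilinear", align_corners=False)

# tiny demo shapes so this runs anywhere
out = big_stroke_transfer(torch.rand(1, 3, 512, 512), work_size=128)
print(out.shape)  # torch.Size([1, 3, 512, 512])
```

The trade-off is exactly the one discussed above: the strokes become large relative to the content, but the output contains no detail beyond what survives the downsampling.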
I understand. Could this be overcome by applying TINs on top of the TIN, basically creating a pyramid?
Also, do you think https://github.com/LouieYang/stroke-controllable-fast-style-transfer could be adapted for URST?
Have you thought about spatial control in URST?
(By the way, URST is impressive. Congrats!)
Yes, URST can be adapted to stroke-controllable-fast-style-transfer. The only problem is that that code is in TensorFlow, so I can't implement it immediately.
I have tried adapting URST to another stroke control method, Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer. It's very easy to implement, and we can get very large brush strokes. I will clean up the code and make it public in the coming days.
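The coarse-to-fine idea behind Multimodal Transfer can be sketched roughly as follows. This is a hypothetical PyTorch skeleton, not the actual adaptation mentioned above; `style_subnet`, `enhance_subnet`, and `multimodal_transfer` are placeholder names, with identity functions standing in for the trained subnets:

```python
# Hedged sketch of the coarse-to-fine pipeline in Multimodal Transfer
# (Wang et al. 2017): a style subnet runs at low resolution to set large
# strokes, then an enhancement subnet refines the upsampled result.
import torch
import torch.nn.functional as F

def style_subnet(x):
    # placeholder: would be a trained low-resolution stylization network
    return x

def enhance_subnet(x):
    # placeholder: would be a trained full-resolution refinement network
    return x

def multimodal_transfer(content, coarse=256):
    h, w = content.shape[-2:]
    # Stage 1: stylize at low resolution -> strokes look large vs. content
    coarse_in = F.interpolate(content, size=(coarse, coarse),
                              mode="bilinear", align_corners=False)
    coarse_out = style_subnet(coarse_in)
    # Stage 2: upsample and refine details at the original resolution
    up = F.interpolate(coarse_out, size=(h, w),
                       mode="bilinear", align_corners=False)
    return enhance_subnet(up)

# small demo shapes so this runs anywhere
restored = multimodal_transfer(torch.rand(1, 3, 320, 320), coarse=64)
print(restored.shape)  # torch.Size([1, 3, 320, 320])
```

Because the stroke scale is fixed by the coarse stage, this hierarchy can produce much larger brush strokes than a single network running at full resolution.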
Hi @czczup, thanks for your work! You mentioned the following, so have you made any progress on it?
I have tried to adapt URST to another stroke control method - Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer. It's very easy to implement, and we can get very large brush strokes. I would clean up the code and make it public in recent days.
Hello, thanks for your attention. I'm sorry, I can't find the code I implemented at that time. I can try to re-implement it, but it will take some time.
It's okay. Thanks for the reply.
Hi, Multimodal Transfer is ready now.
https://github.com/czczup/URST/tree/main/Wang2017Multimodal
I'm gonna go back and read that paper; I got distracted by the Stable Diffusion buzz. Thanks for your hard work!!