unchained.guru

an old bear's open technology watch

“Buy the ticket, take the ride.”

In the twisted, chaotic world that we live in, artificial intelligence has found its way into the realm of image generation. So, buckle up and get ready for a wild adventure into the world of BulkAI, a software that generates AI images in bulk like some sort of psychedelic trip through the digital realm.

Disclaimer: Automating user Discord, Midjourney, and BlueWillow accounts violates each service's Terms of Service and Community Guidelines, and could result in your account(s) being terminated.

bulkai was written as a proof of concept, and the code has been released for educational purposes only. Neither the authors nor this humble article accept any liability for what your usage may entail.

Let’s dive headfirst into this bizarre world of automated artistry.

Step 1: Installing the Beast

The first thing you’ll need to do is tame the beast known as BulkAI. You can install it with the Go toolchain:

go install github.com/igolaizola/bulkai/cmd/bulkai@latest

Or download the binary from the releases page if you prefer the pre-packaged experience.

If you want to go deeper, building with make is a breeze:

git clone https://github.com/igolaizola/bulkai.git
make build

Step 2: Creating the Session

Before you can unleash the full potential of BulkAI, you must first create a session file containing your Discord credentials and other browser information. BulkAI needs this to log in to Discord, mimic your browser, and avoid being detected as a bot. To create the session file, use the bulkai create-session command, which will open a Chrome window and prompt you to log in to Discord.

Note that you can easily do this on your desktop, and use the generated session.yaml on your webserver :)

Step 3: Configuring the Settings

Now that BulkAI is set up, it’s time to configure the settings. Create a YAML configuration file with your desired settings, like the following example bulkai.yaml:

bot: midjourney
album: uchrony
download: true
upscale: true
variation: false
thumbnail: true
suffix: " --ar 2:3"
prompt:
  - Richard Nixon as hippie peace-maker, 70s LSD electoral paper
  - James Brown as Vice President of the USA, 70s LSD electoral paper

And then run the command:

bulkai generate --config bulkai.yaml

For a detailed list of available parameters, see the original documentation.

Step 4: Launching the Madness

BlueWillow and Midjourney are easy to manage when using your personal creations’ URLs. Finally, unleash the power of BulkAI by running the bulkai generate command. Images will be downloaded into a directory named after the album, inside the output directory specified in the configuration file. You can watch the progress in the console, but beware: this task may take a while depending on the number of images to generate.

If at any point you need to stop the generation, simply press Ctrl+C. To resume the generation, run the command again using the same settings and album name. The prompt field will be ignored, and the prompts will be loaded from the album.

And there you have it! BulkAI will now churn out AI-generated images like some sort of twisted art factory, producing fabulous images at a dizzying pace.

“When the going gets weird, the weird turn pro.” So embrace the strange world of AI-generated art and let BulkAI take you on a wild ride through the digital realm.

Some bulkai.yaml samples if you’re not inspired:

bot: bluewillow
album: trippy-landscapes
download: true
upscale: true
variation: false
thumbnail: true
suffix: " --ar 16:9"
prompt:
  - surreal steampunk city surrounded by spaceships, global
  - NYC streets full of a psychedelic crowd on strike, TV screengrab
  - underwater city with neonpunk lights and fish

bot: midjourney
album: hypnotic-portraits
download: true
upscale: true
variation: true
thumbnail: true
suffix: " --ar 3:2 --v 5"
prompt:
  - portrait of a person with swirling eyes
  - face with an ever-changing, morphing expression
  - cubist portrait inspired by Moebius

Bye! Go AI Go!

Quotes are from Hunter S. Thompson. Pictures were easily composed with Midjourney by the author.

The Invisible Internet Project (I2P) is a fully encrypted private network layer that has been developed with privacy and security by design in order to provide protection for your activity, location and your identity. The software ships with a router that connects you to the network and applications for sharing, communicating and building.

I2P Cares About Privacy

I2P hides the server from the user and the user from the server. All I2P traffic is internal to the I2P network. Traffic inside I2P does not interact with the Internet directly. It is a layer on top of the Internet. It uses encrypted unidirectional tunnels between you and your peers. No one can see where traffic is coming from, where it is going, or what the contents are. Additionally I2P offers resistance to pattern recognition and blocking by censors. Because the network relies on peers to route traffic, location blocking is also reduced.

How to Connect to the I2P Network

The Invisible Internet Project provides software to download that connects you to the network. In addition to the network privacy benefits, I2P provides an application layer that allows people to use and create familiar apps for daily use. I2P provides its own unique DNS so that you can self host or mirror content on the network. You can create and own your own platform that you can add to the I2P directory or only invite your friends. The I2P network functions the same way the Internet does. When you download the I2P software, it includes everything you need to connect, share, and create privately.

An Overview of the Network

I2P uses cryptography to achieve a variety of properties for the tunnels it builds and the communications it transports. I2P tunnels use transports, NTCP2 and SSU, to hide the nature of the traffic being transported over them. Connections are encrypted from router-to-router, and from client-to-client (end-to-end). Forward secrecy is provided for all connections. Because I2P is cryptographically addressed, I2P addresses are self-authenticating and only belong to the user who generated them.

I2P is a secure and traffic protecting Internet-like layer. The network is made up of peers (“routers”) and unidirectional inbound and outbound virtual tunnels. Routers communicate with each other using protocols built on existing transport mechanisms (TCP, UDP, etc), passing messages. Client applications have their own cryptographic identifier (“Destination”) which enables them to send and receive messages. These clients can connect to any router and authorize the temporary allocation (“lease”) of some tunnels that will be used for sending and receiving messages through the network. I2P has its own internal network database (using a modification of the Kademlia DHT) for distributing routing and contact information securely.

About Decentralization and I2P

The I2P network is almost completely decentralized, with the exception of what are called “Reseed Servers,” which are how you first join the network. This deals with the DHT (Distributed Hash Table) bootstrap problem. Basically, there’s no good and reliable way to avoid running at least one permanent bootstrap node that non-network users can find to get started. Once you’re connected to the network, you only discover peers by building “exploratory” tunnels, but to make your initial connection, you need to get a peer set from somewhere. The reseed servers, which you can see listed on http://127.0.0.1:7657/configreseed in the Java I2P router, provide you with those peers. You then connect to them with the I2P router until you find one that you can reach and build exploratory tunnels through. Reseed servers can tell that you bootstrapped from them, but nothing else about your traffic on the I2P network.

I see IP addresses of all other I2P nodes in the router console. Does that mean my IP address is visible to others?

Yes, this is how a fully distributed peer-to-peer network works. Every node participates in routing packets for others, so your IP address must be known to establish connections. While the fact that your computer runs I2P is public, nobody can see your activities in it. You can't say if a user behind this IP address is sharing files, hosting a website, doing research or just running a node to contribute bandwidth to the project.

What I2P Does Not Do

The I2P network does not officially “Exit” traffic. It has outproxies to the Internet run by volunteers, which are centralized services. I2P is primarily a hidden service network and outproxying is not an official function, nor is it advised. The privacy benefits you get from participating in the I2P network come from remaining in the network and not accessing the internet. I2P recommends that you use Tor Browser or a trusted VPN when you want to browse the Internet privately.


#security

#css #css3d #codepen


Faking “I read my newspaper in a bar” atmosphere

  1. Thanks to Drew for the background picture.

  2. Let the magic begin by setting the transform-style to preserve-3d.

  3. We're gonna play with transform properties: perspective(), rotateX(), rotateY(), rotateZ() and scaleY().

  4. A #CSS transition is added on body:hover to stand the newspaper up on mouse over.

Code is viewable and updatable on Codepen:

About the project

The Daily Jungle was created in #PHP in 2001 and became one of the first French news sources validated by early Google News.

The project stopped in 2006 after the split of the original team and my departure from Marseilles, but it was 5 fabulous years with a great community of readers and writers.

We lost the sources, but most of the content is still available on archive.org

#python


This project shows how powerful Python can be when it's well used

EG3D

We're talking about generating a reliable 3D object model by studying a single picture.

It won't be long before deepfakes can be generated from a single photograph 😨


Efficient Geometry-aware 3D Generative Adversarial Networks (EG3D)
Official PyTorch implementation of the CVPR 2022 paper

Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein
* equal contribution

https://nvlabs.github.io/eg3d/

Abstract: Unsupervised generation of high-quality multi-view-consistent images and 3D shapes using only collections of single-view 2D photographs has been a long-standing challenge. Existing 3D GANs are either compute-intensive or make approximations that are not 3D-consistent; the former limits quality and resolution of the generated images and the latter adversely affects multi-view consistency and shape quality. In this work, we improve the computational efficiency and image quality of 3D GANs without overly relying on these approximations. We introduce an expressive hybrid explicit-implicit network architecture that, together with other design choices, synthesizes not only high-resolution multi-view-consistent images in real time but also produces high-quality 3D geometry. By decoupling feature generation and neural rendering, our framework is able to leverage state-of-the-art 2D CNN generators, such as StyleGAN2, and inherit their efficiency and expressiveness. We demonstrate state-of-the-art 3D-aware synthesis with FFHQ and AFHQ Cats, among other experiments.

For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing

Requirements

  • We recommend Linux for performance and compatibility reasons.
  • 1–8 high-end NVIDIA GPUs. We have done all testing and development using V100, RTX3090, and A100 GPUs.
  • 64-bit Python 3.8 and PyTorch 1.11.0 (or later). See https://pytorch.org for PyTorch install instructions.
  • CUDA toolkit 11.3 or later. (Why is a separate CUDA toolkit installation required? We use the custom CUDA extensions from the StyleGAN3 repo. Please see Troubleshooting).
  • Python libraries: see environment.yml for exact library dependencies. You can use the following commands with Miniconda3 to create and activate your Python environment:
    • cd eg3d
    • conda env create -f environment.yml
    • conda activate eg3d

Getting started

Pre-trained networks are stored as *.pkl files that can be referenced using local filenames. See Models for download links to pre-trained checkpoints.

Generating media

# Generate videos using pre-trained model

python gen_videos.py --outdir=out --trunc=0.7 --seeds=0-3 --grid=2x2 \
    --network=networks/network_snapshot.pkl

# Generate the same 4 seeds in an interpolation sequence

python gen_videos.py --outdir=out --trunc=0.7 --seeds=0-3 --grid=1x1 \
    --network=networks/network_snapshot.pkl
# Generate images and shapes (as .mrc files) using pre-trained model

python gen_samples.py --outdir=out --trunc=0.7 --shapes=true --seeds=0-3 \
    --network=networks/network_snapshot.pkl

We visualize our .mrc shape files with UCSF ChimeraX.

To visualize a shape in ChimeraX, do the following:

  1. Import the .mrc file with File > Open
  2. Find the selected shape in the Volume Viewer tool (located under Tools > Volume Data > Volume Viewer)
  3. Change the volume type to “Surface”
  4. Change the step size to 1
  5. Change the level set to 10. Note that the optimal level can vary by object, but is usually between 2 and 20; individual adjustment may make certain shapes slightly sharper
  6. In the Lighting menu in the top bar, change lighting to “Full”

Interactive visualization

This release contains an interactive model visualization tool that can be used to explore various characteristics of a trained model. To start it, run:

python visualizer.py

See the Visualizer Guide for a description of important options.

Using networks from Python

You can use pre-trained networks in your own Python code as follows:

with open('ffhq.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()  # torch.nn.Module
z = torch.randn([1, G.z_dim]).cuda()    # latent codes
c = torch.cat([cam2world_pose.reshape(-1, 16), intrinsics.reshape(-1, 9)], 1) # camera parameters
img = G(z, c)['image']                           # NCHW, float32, dynamic range [-1, +1], no truncation

The above code requires torch_utils and dnnlib to be accessible via PYTHONPATH. It does not need source code for the networks themselves — their class definitions are loaded from the pickle via torch_utils.persistence.

The pickle contains three networks. 'G' and 'D' are instantaneous snapshots taken during training, and 'G_ema' represents a moving average of the generator weights over several training steps. The networks are regular instances of torch.nn.Module, with all of their parameters and buffers placed on the CPU at import and gradient computation disabled by default.

The generator consists of two submodules, G.mapping and G.synthesis, that can be executed separately. They also support various additional options:

w = G.mapping(z, conditioning_params, truncation_psi=0.5, truncation_cutoff=8)
img = G.synthesis(w, camera_params)['image']

Please refer to gen_samples.py for a complete code example.

Preparing datasets

Datasets are stored as uncompressed ZIP archives containing uncompressed PNG files and a metadata file dataset.json for labels. Each label is a 25-length list of floating point numbers, which is the concatenation of the flattened 4x4 camera extrinsic matrix and flattened 3x3 camera intrinsic matrix. Custom datasets can be created from a folder containing images; see python dataset_tool.py --help for more information. Alternatively, the folder can also be used directly as a dataset, without running it through dataset_tool.py first, but doing so may lead to suboptimal performance.

FFHQ: Download and process the Flickr-Faces-HQ dataset using the following commands.

  1. Ensure the Deep3DFaceRecon_pytorch submodule is properly initialized

    git submodule update --init --recursive
    
  2. Run the following commands

    cd dataset_preprocessing/ffhq
    python runme.py
    

Optional: preprocessing in-the-wild portrait images. In case you want to crop in-the-wild face images and extract poses using Deep3DFaceRecon_pytorch in a way that aligns with the FFHQ data above and the checkpoint, run the following commands:

cd dataset_preprocessing/ffhq
python preprocess_in_the_wild.py --indir=INPUT_IMAGE_FOLDER

AFHQv2: Download and process the AFHQv2 dataset with the following.

  1. Download the AFHQv2 images zipfile from the StarGAN V2 repository
  2. Run the following commands:

     cd dataset_preprocessing/afhq
     python runme.py "path/to/downloaded/afhq.zip"

ShapeNet Cars: Download and process renderings of the cars category of ShapeNet using the following commands. NOTE: the following commands download renderings of the ShapeNet cars from the Scene Representation Networks repository.

cd dataset_preprocessing/shapenet
python runme.py

Training

You can train new networks using train.py. For example:

# Train with FFHQ from scratch with raw neural rendering resolution=64, using 8 GPUs.
python train.py --outdir=~/training-runs --cfg=ffhq --data=~/datasets/FFHQ_512.zip \
  --gpus=8 --batch=32 --gamma=1 --gen_pose_cond=True

# Second stage finetuning of FFHQ to 128 neural rendering resolution (optional).
python train.py --outdir=~/training-runs --cfg=ffhq --data=~/datasets/FFHQ_512.zip \
  --resume=~/training-runs/ffhq_experiment_dir/network-snapshot-025000.pkl \
  --gpus=8 --batch=32 --gamma=1 --gen_pose_cond=True --neural_rendering_resolution_final=128

# Train with Shapenet from scratch, using 8 GPUs.
python train.py --outdir=~/training-runs --cfg=shapenet --data=~/datasets/cars_train.zip \
  --gpus=8 --batch=32 --gamma=0.3

# Train with AFHQ, finetuning from FFHQ with ADA, using 8 GPUs.
python train.py --outdir=~/training-runs --cfg=afhq --data=~/datasets/afhq.zip \
  --gpus=8 --batch=32 --gamma=5 --aug=ada --neural_rendering_resolution_final=128 --gen_pose_cond=True --gpc_reg_prob=0.8

Please see the Training Guide for a guide to setting up a training run on your own data.

Please see Models for recommended training configurations and download links for pre-trained checkpoints.

The results of each training run are saved to a newly created directory, for example ~/training-runs/00000-ffhq-ffhq512-gpus8-batch32-gamma1. The training loop exports network pickles (network-snapshot-<KIMG>.pkl) and random image grids (fakes<KIMG>.png) at regular intervals (controlled by --snap). For each exported pickle, it evaluates FID (controlled by --metrics) and logs the result in metric-fid50k_full.jsonl. It also records various statistics in training_stats.jsonl, as well as *.tfevents if TensorBoard is installed.

Quality metrics

By default, train.py automatically computes FID for each network pickle exported during training. We recommend inspecting metric-fid50k_full.jsonl (or TensorBoard) at regular intervals to monitor the training progress. When desired, the automatic computation can be disabled with --metrics=none to speed up the training slightly.

Additional quality metrics can also be computed after the training:

# Previous training run: look up options automatically, save result to JSONL file.
python calc_metrics.py --metrics=fid50k_full \
    --network=~/training-runs/network-snapshot-000000.pkl

# Pre-trained network pickle: specify dataset explicitly, print result to stdout.
python calc_metrics.py --metrics=fid50k_full --data=~/datasets/ffhq_512.zip \
    --network=ffhq-128.pkl

Note that the metrics can be quite expensive to compute (up to 1h), and many of them have an additional one-off cost for each new dataset (up to 30min). Also note that the evaluation is done using a different random seed each time, so the results will vary if the same metric is computed multiple times.

References:

  1. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, Heusel et al. 2017
  2. Demystifying MMD GANs, Bińkowski et al. 2018

Citation

@inproceedings{Chan2022,
  author = {Eric R. Chan and Connor Z. Lin and Matthew A. Chan and Koki Nagano and Boxiao Pan and Shalini De Mello and Orazio Gallo and Leonidas Guibas and Jonathan Tremblay and Sameh Khamis and Tero Karras and Gordon Wetzstein},
  title = {Efficient Geometry-aware {3D} Generative Adversarial Networks},
  booktitle = {CVPR},
  year = {2022}
}

Development

This is a research reference implementation and is treated as a one-time code drop. As such, we do not accept outside code contributions in the form of pull requests.

Acknowledgements

We thank David Luebke, Jan Kautz, Jaewoo Seo, Jonathan Granskog, Simon Yuen, Alex Evans, Stan Birchfield, Alexander Bergman, and Joy Hsu for feedback on drafts, Alex Chan, Giap Nguyen, and Trevor Chan for help with diagrams, and Colette Kress and Bryan Catanzaro for allowing use of their photographs. This project was in part supported by Stanford HAI and a Samsung GRO. Koki Nagano and Eric Chan were partially supported by DARPA’s Semantic Forensics (SemaFor) contract (HR0011-20-3-0005). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. Distribution Statement “A” (Approved for Public Release, Distribution Unlimited).

#Commodore #electronics


crazy coder of the day 😁

Watching YouTube on a Commodore Pet


If you want to understand how he brings graphics to a text display, pushing the boundaries of “automated ASCII art” on a Commodore PET (also known as Petscii art):

Petscii Art 2.0: Emulating Graphics on a Text Display


The author of this hack has also created a pretty good digital circuit simulator:

http://kretsim.se

The primary target audience is hobbyists (in particular people within the retro-computing community), non- or semi-professionals and education. The component directory contains many parts that were common in the 70s, 80s and 90s, like TTL logic and 8-bit CPUs and memories. The selection of physical parts is bread-board friendly (e.g. DIP packages).

There is also a YouTube channel with some information about KretSim: https://www.youtube.com/channel/UClwSyaPo_9NFgxgMUCDrgdA


echo base_convert(696468,10,36)(base_convert(15941,10,36).base_convert(16191,10,36)(32).base_convert(16191,10,36)(46).base_convert(1529794381,10,36));

Thanks for all, Rasmus. Thank you @lexterleet for the tip.


More about the Go language

During its decade-plus years in the wild, Google’s Go language aka Golang has evolved from being a curiosity for alpha geeks to being the battle-tested programming language behind some of the world’s most important cloud-centric projects. 

Why was Go chosen by the developers of such projects as Docker and Kubernetes? What are Go’s defining characteristics, how does it differ from other programming languages, and what kinds of projects is it most suitable for building? In this article, we’ll explore Go’s feature set, the optimal use cases, the language’s omissions and limitations, and where Go may be going from here.

Go language is small and simple

Go, or Golang as it is often called, was developed by Google employees—chiefly longtime Unix guru and Google distinguished engineer Rob Pike—but it’s not strictly speaking a “Google project.” Rather, Go is developed as a community-led open source project, spearheaded by leadership that has strong opinions about how Go should be used and the direction the language should take.

Go is meant to be simple to learn, straightforward to work with, and easy to read by other developers. Go does not have a large feature set, especially when compared to languages like C++. Go is reminiscent of C in its syntax, making it relatively easy for longtime C developers to learn. That said, many features of Go, especially its concurrency and functional programming features, harken back to languages such as Erlang.

As a C-like language for building and maintaining cross-platform enterprise applications of all sorts, Go has much in common with Java. And as a means of enabling rapid development of code that might run anywhere, you could draw a parallel between Go and Python, though the differences are far greater than the similarities.

Go language has something for everyone

The Go documentation describes Go as “a fast, statically typed, compiled language that feels like a dynamically typed, interpreted language.” Even a large Go program will compile in a matter of seconds. Plus, Go avoids much of the overhead of C-style include files and libraries.

Go makes the developer’s life easy in a number of ways.

Go is convenient

Go has been compared to scripting languages like Python in its ability to satisfy many common programming needs. Some of this functionality is built into the language itself, such as “goroutines” for concurrency and threadlike behavior, while additional capabilities are available in Go standard library packages, like Go’s http package. Like Python, Go provides automatic memory management capabilities including garbage collection.

Unlike scripting languages such as Python, Go code compiles to a fast-running native binary. And unlike C or C++, Go compiles extremely fast—fast enough to make working with Go feel more like working with a scripting language than a compiled language. Further, the Go build system is less complex than those of other compiled languages. It takes few steps and little bookkeeping to build and run a Go project.
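As a taste of those built-in concurrency features, here is a minimal sketch (the sumSquares helper is an invented example, not from any library) that squares numbers in separate goroutines and collects the results on a channel:

```go
package main

import (
	"fmt"
	"sync"
)

// sumSquares squares each element in its own goroutine and gathers
// the results on a buffered channel — goroutines and channels are
// part of the language, no threading library required.
func sumSquares(nums []int) int {
	results := make(chan int, len(nums))
	var wg sync.WaitGroup
	for _, n := range nums {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			results <- n * n
		}(n)
	}
	wg.Wait()
	close(results)

	total := 0
	for r := range results {
		total += r
	}
	return total
}

func main() {
	fmt.Println(sumSquares([]int{1, 2, 3, 4})) // prints 30
}
```

The goroutines run in any order, but since addition is commutative the total is deterministic.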

Go is fast

Go binaries run more slowly than their C counterparts, but the difference in speed is negligible for most applications. Go performance is as good as C for the vast majority of work, and generally much faster than other languages known for speed of development (e.g., JavaScript, Python, and Ruby).

Go is portable

Executables created with the Go toolchain can stand alone, with no default external dependencies. The Go toolchain is available for a wide variety of operating systems and hardware platforms, and can be used to compile binaries across platforms.

Go is interoperable

Go delivers all of the above without sacrificing access to the underlying system. Go programs can talk to external C libraries or make native system calls. In Docker, for instance, Go interfaces with low-level Linux functions, cgroups, and namespaces, to work container magic.

Go is widely supported

The Go toolchain is freely available as a Linux, MacOS, or Windows binary or as a Docker container. Go is included by default in many popular Linux distributions, such as Red Hat Enterprise Linux and Fedora, making it somewhat easier to deploy Go source to those platforms. Support for Go is also strong across many third-party development environments, from Microsoft Visual Studio Code to ActiveState’s Komodo IDE.

Where Go language works best

No language is suited to every job, but some languages are suited to more jobs than others.

Go shines brightest for developing the following application types.

Cloud-native development

Go’s concurrency and networking features, and its high degree of portability, make it well-suited for building cloud-native apps. In fact, Go was used to build several cornerstones of cloud-native computing including Docker, Kubernetes, and Istio.

Distributed network services

Network applications live and die by concurrency, and Go’s native concurrency features—goroutines and channels, mainly—are well suited for such work. Consequently, many Go projects are for networking, distributed functions, and cloud services: APIs, web servers, minimal frameworks for web applications, and the like.

Utilities and stand-alone tools

Go programs compile to binaries with minimal external dependencies. That makes them ideally suited to creating utilities and other tooling, because they launch quickly and can be readily packaged up for redistribution. One example is an access server called Teleport (for SSH, among other things). Teleport can be deployed on servers quickly and easily by compiling it from source or downloading a prebuilt binary.
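For instance, a stand-alone tool can be as small as the following sketch (the render helper and --upper flag are invented for illustration); go build turns it into a single binary with no runtime to install on the target machine:

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// render joins the arguments and optionally uppercases them —
// trivial on purpose: the point is the packaging, not the logic.
func render(upper bool, words []string) string {
	msg := strings.Join(words, " ")
	if upper {
		msg = strings.ToUpper(msg)
	}
	return msg
}

func main() {
	upper := flag.Bool("upper", false, "uppercase the output")
	flag.Parse()
	fmt.Println(render(*upper, flag.Args()))
}
```

`go build` produces one executable you can copy to any machine of the same OS and architecture.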

Go language limitations

Go’s opinionated set of features has drawn both praise and criticism. Go is designed to err on the side of being small and easy to understand, with certain features deliberately omitted. The result is that some features that are commonplace in other languages simply aren’t available in Go—on purpose.

One longstanding complaint was the lack of generic functions, which allow a function to accept many different types of variables. For many years, Go’s development team held out against adding generics to the language, on the grounds that they wanted a syntax and set of behaviors that complemented the rest of Go. But as of Go 1.18, released in early 2022, the language now includes a syntax for generics. The lesson to be drawn is that Go adds major features rarely and only after much consideration, the better to preserve broad compatibility across versions.
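A quick sketch of the Go 1.18 generics syntax (the Number constraint and Sum function are invented examples):

```go
package main

import "fmt"

// Number lists the types Sum accepts; the ~ also admits named types
// whose underlying type is int or float64.
type Number interface {
	~int | ~float64
}

// Sum works over any slice of Number — before Go 1.18 this needed
// one hand-written copy per element type.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))      // prints 6
	fmt.Println(Sum([]float64{1.5, 2.5})) // prints 4
}
```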

Another potential downside to Go is the size of the generated binaries. Go binaries are statically compiled by default, meaning that everything needed at runtime is included in the binary image. This approach simplifies the build and deployment process, but at the cost of a simple “Hello, world!” weighing in at around 1.5MB on 64-bit Windows. The Go team has been working to reduce the size of those binaries with each successive release. It is also possible to shrink Go binaries with compression or by removing Go’s debug information. This last option may work better for stand-alone distributed apps than for cloud or network services, where having debug information is useful if a service fails in place.

Yet another touted feature of Go, automatic memory management, can be seen as a drawback, as garbage collection requires a certain amount of processing overhead. By design, Go doesn’t provide manual memory management, and garbage collection in Go has been criticized for not dealing well with the kinds of memory loads that appear in enterprise applications.

That said, each new version of Go seems to improve the memory management features. For example, Go 1.8 brought significantly shorter lag times for garbage collection. Go developers do have the ability to use manual memory allocation in a C extension, or by way of a third-party manual memory management library, but most Go developers prefer native solutions to those problems.
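One such native solution is sync.Pool, which reuses allocations instead of leaving them for the garbage collector to reclaim. A small sketch (the greet function is an invented example):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable buffers, cutting per-call allocations
// that the garbage collector would otherwise have to clean up —
// a common pattern in Go services under allocation pressure.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// greet formats a message using a pooled buffer, returning the
// buffer to the pool when done.
func greet(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()
		bufPool.Put(buf)
	}()
	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(greet("gopher")) // prints "hello, gopher"
}
```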

The culture of software around building rich GUIs for Go applications, such as those in desktop applications, is still scattered.

Most Go applications are command-line tools or network services. That said, various projects are working to bring rich GUIs for Go applications. There are bindings for the GTK and GTK3 frameworks. Another project is intended to provide platform-native UIs, although these rely on C bindings and are not written in pure Go. And Windows users can try out walk. But no clear winner or safe long-term bet has emerged in this space, and some projects, such as a Google attempt to build a cross-platform GUI library, have gone by the wayside. Also, because Go is platform-independent by design, it’s unlikely any of these will become a part of the standard package set.

Although Go can talk to native system functions, it was not designed for creating low-level system components, such as kernels or device drivers, or embedded systems. After all, the Go runtime and the garbage collector for Go applications are dependent on the underlying OS. (Developers interested in a cutting-edge language for that kind of work might look into the Rust language.)

Go language futures

Go’s future development is turning more towards the wants and needs of its developer base, with Go’s minders changing the language to better accommodate this audience, rather than leading by stubborn example. A case in point is generics, finally added to the language after much deliberation about the best way to do so.

The 2021 Go Developer Survey found Go users were on the whole happy with what the language offers, but also cited plenty of room for improvement. Top areas in which Go users wanted improvements were dependency management (a constant challenge in Go), diagnosing bugs, and reliability, with issues like memory, CPU usage, binary sizes, and build times ranking much lower.

Most languages gravitate to a core set of use cases. In the decade Go has been around, its niche has become network services, where it’s likely to continue expanding its hold. By and large, the main use case cited for the language was creating APIs or RPC services (49%), followed by data processing (10%), web services (10%), and CLI applications (8%).

Another sign of the Go language’s growing appeal is how many developers opt for it after evaluating it. 75% of those polled who considered using Go for a project chose the language. Of those who didn’t choose Go, Rust (25%), Python (17%), and Java (12%) were the top alternatives. Each of those languages has found, or is finding, other niches: Rust for safe and fast systems programming; Python for prototyping, automation, and glue code; and Java for long-standing enterprise applications.

It remains to be seen how far Go’s speed and development simplicity will take it into other use cases, or how deeply Go will penetrate enterprise development. But Go’s future as a major programming language is already assured—certainly in the cloud, where the speed and simplicity of Go ease the development of scalable infrastructure that can be maintained in the long run.


#php #golang

#tor #golang #security


A cross-platform remote administration tool written in Go that uses Tor as its transport mechanism, currently supporting Windows, Linux, and macOS clients.

DISCLAIMER

USE FOR EDUCATIONAL OR INTERNAL TESTING PURPOSES ONLY

How to use ToRat Docker Image

TL;DR

git clone https://github.com/lu4p/ToRat.git
cd ./ToRat
sudo docker build . -t torat
sudo docker run -it -v "$(pwd)"/dist:/dist_ext torat

Prerequisites

  1. Install Docker on Linux

Install

  1. Clone this repo via git

    git clone https://github.com/lu4p/ToRat.git
    
  2. Change Directory to ToRat

    cd ./ToRat
    
  3. Build the ToRat Docker Container

     You need to build part of the container yourself to get your own onion address and certificate. All other prerequisites are met by the prebuilt torat-pre image in order to keep build times short.

    sudo docker build . -t torat
  4. Run the container

     This drops you directly into the ToRat server shell. The -v flag copies the compiled binaries to the host file system. To connect a machine to the server shell, just run one of the client binaries on another system.

    sudo docker run -it -v "$(pwd)"/dist:/dist_ext torat
    
  5. In another shell run the client.

    sudo chown $USER dist/ -R
    cd dist/dist/client/
    ./client_linux
    
  6. See the client connect

In your server shell you should now see something like [+] New Client H9H2FHFuvUs9Jz8U connected! You can now select this client by running select in the server shell, which gives you an interactive chooser for the client you want to connect to. After you choose a client, you drop into an interactive shell on the client system.

Notes

Contents of ToRat/dist after docker run

$ find ./dist
./dist/
./dist/dist
./dist/dist/client
./dist/dist/client/client_linux                   # linux client binary
./dist/dist/client/client_windows.exe             # windows client binary
./dist/dist/server
./dist/dist/server/key.pem                              # tls private-key
./dist/dist/server/banner.txt                           # banner
./dist/dist/server/cert.pem                             # tls cert
./dist/dist/server/ToRat_server                         # linux server binary

Client Commands

Command     Info
cd          change the working directory of the client
ls          list the contents of the client's working directory
shred       delete files/directories unrecoverably
screen      take a screenshot of the client
cat         view text files from the client, including .docx, .rtf, .pdf, .odt
alias       give the client a custom alias
down        download a file from the client
up          upload a file to the client
speedtest   test the speed of a client's internet connection
hardware    collect a variety of hardware specs from the client
netscan     scan a client's entire network for online devices and open ports
gomap       scan a local IP on a client's network for open ports and services
escape      escape a command and run it in a native shell on the client
reconnect   tell the client to reconnect
help        list possible commands with usage info
exit        background the current session and return to the main shell

Server Commands

Command     Info
select      select a client to interact with
list        list all connected clients
alias       select a client to give it an alias
cd          change the working directory of the server
help        list possible commands with usage info
exit        exit the server

Current Features

Architecture

  • RPC (Remote Procedure Call) based communication for easy addition of new functionality
  • Automatic upx compression leads to client binaries of ~6 MB with embedded Tor
  • sqlite via gorm for storing information about the clients
  • client is obfuscated via garble
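
To see why an RPC layout makes new commands cheap to add, here is a generic net/rpc sketch in the same spirit; the type and method names below are hypothetical, not ToRat's actual code:

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// PingArgs and CommandService are illustrative names only.
type PingArgs struct{ Path string }

type CommandService struct{}

// Every exported method with this signature automatically becomes a
// remotely callable command, so adding functionality is just adding a method.
func (s *CommandService) Ping(args *PingArgs, reply *string) error {
	*reply = "pong: " + args.Path
	return nil
}

// runDemo wires a server and client over plain TCP; in ToRat the
// listener would sit behind a Tor hidden service instead.
func runDemo() (string, error) {
	srv := rpc.NewServer()
	if err := srv.Register(new(CommandService)); err != nil {
		return "", err
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	go srv.Accept(ln)

	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		return "", err
	}
	defer client.Close()

	var reply string
	err = client.Call("CommandService.Ping", &PingArgs{Path: "/tmp"}, &reply)
	return reply, err
}

func main() {
	reply, err := runDemo()
	if err != nil {
		panic(err)
	}
	fmt.Println(reply) // pong: /tmp
}
```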

Server Shell

  • Cross Platform reverse shell (Windows, Linux, Mac OS)
  • Supports multiple connections
  • Welcome Banner
  • Colored Output
  • Tab-Completion of:

    • Commands
    • Files/ Directories in the working directory of the server
  • Unique persistent ID for every client

    • give a client an Alias
    • all Downloads from client get saved to ./$ID/$filename

Persistence

  • Windows:

    • [ ] Multiple User Account Control Bypasses (Privilege escalation)
    • [ ] Multiple Persistence methods (User, Admin)
  • Linux:

    • [ ] Multiple Persistence methods (User, Admin)

Tor

  • Fully embedded Tor within Go

  • the ToRat client communicates with the ToRat server (hidden service) over TLS-encrypted RPC proxied through Tor

    • [x] anonymity of client and server
    • [x] end-to-end encryption
  • optional transport without Tor, e.g. via Tor2Web, a DNS hostname, or a public/local IP

    • [x] smaller binary (~3 MB upx'ed)
    • [ ] anonymity of client and server

Upcoming Features

Contribution

All contributions are welcome; you don't need to be an expert in Go to contribute.

You may want to join the #torat channel over at the Gophers Slack

Credits

#golang #Hugo


Features

Key features:

  • Page builder – Create anything with widgets and elements
  • Edit any type of content – Blog posts, publications, talks, slides, projects, and more!
  • Create content in Markdown, Jupyter, or RStudio
  • Plugin System – Fully customizable color and font themes
  • Display Code and Math – Code highlighting and LaTeX math supported
  • Integrations – Google Analytics, Disqus commenting, Maps, Contact Forms, and more!
  • Beautiful Site – Simple and refreshing one page design
  • Industry-Leading SEO – Help get your website found on search engines and social media
  • Media Galleries – Display your images and videos with captions in a customizable gallery
  • Mobile Friendly – Look amazing on every screen with a mobile friendly version of your site
  • Multi-language – 35+ language packs including English, 中文, and Português
  • Multi-user – Each author gets their own profile page
  • Privacy Pack – Assists with GDPR
  • Stand Out – Bring your site to life with animation, parallax backgrounds, and scroll effects
  • One-Click Deployment – No servers. No databases. Only files.

Wowchemy Website Builder

Check out the latest demos of what you'll get in less than 60 seconds, or get inspired by other creators.

Starter Templates

Wowchemy is a no-code framework for creating any kind of website using widgets. Each site is 100% customizable to make it your own!

Choose from one of the starter templates to easily get started:

Writing technical content

The Future of Technical Content Writing

Write rich, future-proof content using standardized Markdown along with bundled extensions for math and diagrams. Edit in the open source CMS or via an editor such as the online GitHub Editor, Jupyter Notebook, or RStudio! Learn more

Writing technical content

Themes

Wowchemy comes with automatic day (light) and night (dark) mode built-in. Alternatively, click the moon icon in the top right of one of the Demos to set your preferred mode!

Choose a stunning theme for your site and customize it to your liking:

Themes

Browse more templates and themes...

Ecosystem

Join the community

Feel free to star the project on Github, join the community on Discord, and follow @wowchemy on Twitter to be the first to hear about new features.

License

Copyright 2016-present George Cushen.

The Wowchemy Hugo Themes repository is released under the MIT license.

#Hugo #paas #test


Pretty cool.

Hugo for a forthcoming project is here:

Automatically deployed there:

Simple. Great UI.